[Binary artifact: tar (ustar) archive of the Zuul CI output tree — var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log, owner core:core). The compressed log contents are binary and not recoverable as text.]
*kډ0\9|{{F={δK`r̙fdn`j %$g[llVKm$vYd?V벶 ('&(Q=$aש\Y% L Ԗ3(762/7lC ); \U.F*K<Lqp^h(<;(Ȑ43(0PK{#CsmnNd;$Q1xΒj}鄆 c\=B~_f(nڱ؁-'?i(h0TK7˜ݠS-/Ac:4?7yUb<Ϟ 3FmU-io.gdK#WFLR#toYN: \umBقM Yˎ ]ܛw VLD 7kjhVna[֛ }HrN".8'"VY\P>%ZXMX&1ݛ''f{N=wBwJ_w@\V ia:D6͙\z ($h`=́>T88ߗbqɽʅ{UO|KQqwJ0 #Ë})~Z}~+IUH]ArJR}X]Jׅ>xfcI:+($:*j !xxmRBYIi!Dp9kg RFCLf]fZkr`[Gj׍ϞVVWA2>*,PO8W`HWq2p HH.wgj9w'S Ћ8/P1ҕR˛,PwCɵ 7H*G95YcX8Ww7|w{=m:sǡ.(8;z2[-;$$,3r.dഎ)˩^tÆK)[ĐH @j#J*qgՎseL 3j)J1,FNW|GhvwF[2%C_ъ+P[r,o>Dntu+ [^M <|4/C:$3KE3 G~r1`c"9Ҷ[DN|0Ud_qQw΍$S_ e6u7 wR1&4LfY6dը )`%aGDI)Gv6gnty4#őg˟Zh9+yDg#17OWK\?n.ގ3q}uEt:?Ol>>Y%\k͛׫U8bÁ^GS+rݩѣPU\iϊSifY>F q<1 Wp҆xt\ó|nͣ.'6j\{&dZ_.^ Gt0Y±EoT"Ol8^<)~w?)7\yx0`L¯"@>d ~Ъ8rhЊu]%25T _ގ8e::8Q|ujV0Z Aj} >ZBO֧PS)j} >ZBm_QGPTZY0_{fẠN 8-bY1yY*_'=q`z!Z [g`S@e:(+c@1S9)d(3cST;R^e*T> Ds +tb60U& ,+E3ЫU'MdYء9mJC Ay $D"`1ܹJ*A`0$ܵu@a9{Q$wAo!Z橏p($̉"ިپQɴvB4No۝! Gtٰȃp,uqm8G ,%I2oR%Ιu£*޸͙_dɸ%ozv9Ѭ} *;M go$1)yCH\ZK:Kw u,r͉n'e*;"lrä *nGI L#xK!"g;b8{#]Q[Fmѣv/n,#Z$\D -@~7J!KIjْ@;-(!Ԣ= &D%L{ciu!@<*QY[k~q_슈0"{DUlFwȂN@[֖d(8ԑp;eFSBd NQwT,eqy QFI{Ԥ\ %JSq1pq_w슇0 {A\ﳛ#&' eWޏȞOggSv\d0/dzܯk:kmHʀ_.H~0{1vg~ ~\K‡m%~(J#ɦD9*̰WX|JC+樕FqSfb}n2u^|4] luʱ"gEi8e )3C7,2yXndΜUzH4hW]iQ^-u=T~18\㺭X8Miy$aIi8)nt[X~ꚈJ?lm4{(3Z!1+BB bJ=2LTzT):ټ6K*Ljd<3R&R/5g1тFQ4`pH1FN5e)RJǖ'|Ursw+_|Z~O We/ A#i3.F EnױQ@!ħRoLF؋FHܰ!ӈ qAs")Nʠ9$: GL:_i3v 9.bУ4 wh'$MQ*Ҽa=YŃzw6Uo(OMiq֍AH)W"()>(q5!⃙: kK6LP]6[bf|ɜ or-z~)~J ~¸5.YUzfcXՇtοo}Eϋ47GwE ^f7F=(g=W)|7kXgF'W)ze97M׈߈GbK"i׈􊍣q첊 fu1R1Ƚv@fJqRRD}+e %oЫv_gKweTr9/P 7vz==?ŞeSn7FLRݦuLP=rW36#&1٤ix3sٳ=H;WN^Ěanm̵\wUpt>"SD[vF'.F\Wg /,A1RB==1?pd:Sp.'P<]#̔Ba,h69NUu3ت;4k J/Ki9š'Og^\x 0RMiC )!FfO#IF}Y'2lPj}Q%CVa^: FA`bSPa'"~p 䀚~Lَ:ɷR"Ijp!KPX[͈IUđcf 7O NvAJ4R2pDBJAYP:&i 3ˌ3]O99=z%= ښ[q;|б+aOטxm䵟L3ձ+3nc|~y@iS\ gGs8Q3! 5`"&V4#y`hRzl{ɨ$x@,m-r-bPEt%WVG*oAPI( X)hQ\1 A*!lhYβS*և{^I/ 78Hf0!L!Ig jz~BdNӋC7~dL(dw1VA3'3prQ Vh=C> DpZqZ٠[ v`'$OZ8@&s(ILi"Y"pGQMddJC-Eq XWY cCjCQ'yX9#('vT@" C#`l9]6w\?gLRJlJ(+M40e^&UI-a!)BRu&X{YpԵP/q\:YL"H"1*oea䐤Xx(5tCcib 859Tw$efm}OqyX]b4X$28!Hqp >4sL@BE0q-07 (W2L[Bª1jv lO0¸lG49= ߄ XB]EUL Z0F,9g!29`^ơplUhBMg-*. ۛuXEM%V_TK(WDhkH&r+@ErU1F9AI$C)0̭Ke1XpQ:K)C,*谗.=Ãl@~!k?~Ž.|w8w#$;]Vuo^((+9?] @* M,(U2xi2%GgCO 6Z[9.<=m/#+A r稥`#%\l.#lJa_iGjd^ wE%N~$dC!Ri>QK+7X?9Y 6ه0cz%\t8GBG! 
2./$i/` x"3$ `ܗqZ2P :!Z @xv>x"|t"|W{;^o%-VB8{!~lQ-sffZ]Kh v~]GݓnmAsAWۦuĹOTg6+o?1m]n?~eiKn7{I+j@/ `Uz_J)U1Ee+?G}4*{7 ]IQ &>Ԫ4ےLJ"o3C\ʍoe7>,@'~^EzoWAo3jwfLi%6u^>VPۿ-J;⼼8UZ5Q",S^q NMs\_*1 64rLQYq<}#e^k{Zd㴠x}~Z$u_=ǨQG)L `vHr1>[өW3 `HA?B3).+w]GEfXaD.\;E^>$njDNhTz`=(·?^'MǢ4G^NzGF?_^xz=g?)e, S"N %hx4̽RLHXp"R?vVմMO)b%l^|飢kJo7^wIGv uN:zhNwpA'o{܃dyd+^լrOB3m =vH,{70i1 P"0BIL0 !CIB16 9x໘a~Hgu $]C`­}=||˚bFPD$ )j[kU~0>9ʍR2{?]F{'t7wrv#W=QىZKm0wRc+\(([ 6S^NʀղEMӳ… …~ B,mZy:~+p$Y %/[zi^yz%{/h[Իlr-gy ^*SeT/Gpcq9]V{uu)jo?6 XaQ~(8B-X,:RjKRDe:^]a< i'f]OװLUyoܯgg36k 77$c4a+:~&uvIu|v ~¢1FoSTƘcbvq#V{a˴e$"_jY"6pKa/c .!Xwpω|mV:QDvHun&k~^S[0$(l0%(XeH KKaWj&WPfRnE]!""M( .}Djib޲{Kq}drr$>ZMJڋSCϮ>)m3t Y.>?{WHq_p\;bze &N쌝~Ivb]vő輸"Y|(u'_]uBId3㙓# hTRj/4B.%aDۉnnt\ebˢ'!=Mˁ@^6'ύP'j7~ g}zlMq85y>_ dNX0$Yl}1 ?'!AH}BR4ָa 8mӧ"+()Zu6dUe.rk*8yw+t#Ьw!J/U մ^;E6(j+;QtaO8P xB% %^(syh/^RV wa݉JΎDzA(C_%lTYCp)H<ЍHɧK:Td&w~WbF'0FX0m1˥:YS'<)BB 0RCOCqэ|_ɰhgW>:e،sfZztlBBDf\2&AE7W$ŀyD͵AԆ<4KFyI, yl[A^]<`Puu3LΉȹ3FZ$EyKr6#@VqJmdw5݊.hnNqvf%ײoG(|_E %4WB|%"R:7L?RNDdy ߗAEW}Wb0?J{ʤe쁖r{ɷ{`ĿΙ<@ NIgW7|.Yp,yI|bNش])e7Ђݲ"EihԗR~4IHݫft[C\%Z?~[oww >ˏKHVΌaQ4׷KvUwOb~@sz XVԢy;ڸbx 3үmŕ >_~z9z2s?'bbm[۵@m.EY253I}ˮi{=ͺ!Ei2mm`Y&N.nV =s|9usr[g{Mvutny`H+)+3,?~Qg|4]@ܳc J I&Al󊚷1Cj{;o hxnmH_>|~2cixzzyUjb?ey׉f'qWBl_.J#\h>W)}^E'!smK21FNjJ}@`'fVoӨQH+a8BR3 4OyPz5|9"e#:G/s _ãp>E a;%Y0!'!C/73Tv9o-ƛ9I՞cۏ6sZwa_ =6|ou΍)_LVVԌvLfd]_J3­8fdKC3?`320RdTbG[y"J5EאϋD-ÛT,dvNpⵝv'Q ܖЭS  :1gLrdRq 2iqZn6,)V~Non|9o+"CamjLǀ %gX5ߚzFTP1Fk%EΒ$C }TOdFǓIJhEHKRJN XIqe-5jG^ս4)` b\>y,Fg29rw“ &^Vcg"Բ7ybr=i3";R&&'~VD\i]|ԶSݼރmԨ:]6ٽ-/_{H/IO[jK֛ɝK6MP}og\|M+f1z7yZ|1{yl ȆF6KiOv~En~xxk#wCi Mouuw*=$ .te13Gw;oEsu}1'_>~S+>l<\pä=26qEXlwW )m4ot#ۃyTNj2;~/!{4hAJP3m%Y`΃d ;L)!ywt18)F fH-SI* P:0 ) 2#)L8IVI24 ;L*[A?+^~rL2bMR#a++j5v+jm朒[fNp@x@ *:a/*!U>'1s-+Ir@& >($F̣YU-&ǘ%`c&GLO4DYzJ-9Wg2R-c5v[zJ5[XM2--<-͹PĬ"Cx_Vcg=lA}<6vXjqEԕ-,`wIr,2d4%K͎CR(XpMd@P֍vjK e" ؙZt܈d I4D nj7!~<ѣd̥SuVCl`KY ፗC䑘@lڄlIK 0g9vvTa58f`*wmSNsMsܕ7d?>S0܊ُӏ\Wk,32F IizFZ# #kUV!HIēɝShҟ+/Z#@Y¡tVJǢH9Q56H gKβFT \5 RX`difmgrם3KHފmd \ePI౹M?ߴ/SJ<0*ҏq q,]PM<S'}ЗG݆'#8f֊s-p&xʓsoCTA+Bd387>""Y^` !$ABRMHIg!22 DdRlAUBcҤL'sN4Tҽd<+" ^r+Ucga ߼>\9yxޯA;N4b5na;nj~&U>7<{D$D1g\ %oAeYf2Rj0r|H$9]}cy4!wW#θ0mE. J$^,QcoGoҤm٪b7˦A#fho侴̾Z2+bL?*6C`y-(n[N@&EEq'tv6&SVzzu&Es]kKR- + ])6?͇$&[].;+@:CG`":.Jʎ͇z#֭)#9r#JfKUێe*czUqwI8J,kzozV_x aD94*\ի4l׎2\b*qƣ `μ-*-e)G)ݨX6q*mIUVӮtYH+2fG@@FKa>]io#G+}1ƥ+ wvAZn)7(ɤTʀUŪF(+Ib)$y*梟C~Ip6q2~&ksǼ0+e7_-޼hcn#YbZs5Q^j[(@j梶Ņ663}V%"U]U"՟jf2sn"Qf"ÉAPZ8(o (KeB)3;SGБuǾvV}{sQR WRKRdkB.H #'АD q_,rfbV_P!goy] K@2\2,ֹjqvBk8`=8`C&'NIO=qȊ45f9rLO*$AѤ[ReSLrK4 -b]ԑh\DKH΢0'T hy(Ryfڧ3Y$ Y–o~X !GkւleUZp"$#$灓!Wb>0JF!<1 "|Ⱇfb3 ig":DHF /Ȉb yJAi&mQ=$YgtѨZ(Ǥ", fJ¥,D'Rȉ5kNG=kgfڙj>R/ D٠i4*@L6SФi Ҕs^Ӽ:m)0%3 !IhFƠeiZŠR7,NG{w ȡ(!䉂M(7{,_-2I?(CqZFN e9ؐhSL5dHBJ6@4Gլ9zWէzԸڙ8T\+H_'!l>J }Kyu&& mBuPh;:M?>w?T}zGzt\UXAS4ܼ}?6?I]/v_w3x?Yԧ5M}{7OnrQ6r~~^T[0k클%G{+3!-&RCZ3& !u;LPՉEV NBmk,DÄt=RZEY:2V=VlQwd  Gmpz"Hŕk]Xf]yS mzb: Xacoa_&oS71UZYoPʾJ<]vkBj'N:f:e7FJq09.*E 0)yH!]57`"b@ep[JzKrs ?gR҂[Q]@a(9wsnnr+d wtUqҜ^Q58!`AXKHE̙ځ3ubbU]:2]:eP`t|DFL!IH<䃩nVOGlhf_ߺߗ,ٰ-s~`OL5c&xh C/gxrv57ǝlژ'裯%($gՒ'Z gum,u ޒSVLIծܬ <= -gWت8W}3^Ы8kIçHm+/-wM~T;M3`T%p7|^χkT1@zԢ\}gYt[bv/5rvZߣjEv<W?3iū2ãi̓ϗ lT5 +.^1PIci:RQyq}ˋo_|fnwY}ڗt2s7IuBn>.S3o~y *N7*C^0 =8UmL.-}.$]q@d`ڷeʙ1Ƙ?!+/6^W o{oX{ܼh*ST#^>0t(C=hlR`6ss 9hlѹ\/p#6CZ_bv,f^饅`n&{GW%(huyE<HrwIHERlimG\%t|;jl|^U/}ۃ^lOez ՄK/h:U/z`t3Ru"08|Gص* $!}(an)`x+l\ f컹͕PUn5RQ5h4pOs9w+3QnazC=]p4y_MVEz+;-XŊ\e] ?%&'7Y*/*sr#A g(pVsu6^~ghgpÝe<,ac;(C_},Ϳ_} +V~D's/OHr? %^@ˌ+ʢvf`9! 
" DKg ~VT lo+~2ZLBmsz{Ƌn/5`.J+~=Ĺk;3̟\@^zʳUWfF`đ $gߒ.D8b<>ϵdqdwC,'s˸8jMNC ^*޾"?ǔ=@ASY4>9'1J}VbB}E_dwPɝE"s23VB_?+~.C73Iv_10O)4h%xVik)xpP4q C]-|u+xHa7މ'mxHEJWej2gG{'mtN=i2hRmN`)wZG㉔*&z;Ӏn1D/"rj#PJG1qv(C(e C﹇zotl'jN;w\22'C_Ҋ3_ot_.p!wG>_Lzky-(k\|$!Ө_D-tIRgXjd UQ\LhXEHNd7$8]#^>8E$&JF`uνJJn]ލ9=ѭR~Cun27s o&jK񺌹Lnv0XMT5D%Ć.Po߼7Wo_y{T߾U8U+ fo~ދ<_bh8|hy 2.[}>q4Z)c?\j~߾\^Un\jda_AQ/Nj}Cꛇ\)"+Y}hCfŅ} c[۵4'u(;J[)[ QL1 S=+SU+s\p\ߦޛ\Myr̛V឵w>@!(O)h#~ru{(~_orG{~ڔ*>XDh}CYQ&gV+J9:)碽؊{W7G|r $iW"MZNxg^ } R2Z^QjBHM& )*$h1蔸U10 MR}#colFlް7 yXȏXX(?:[gY/y竼-lTO~}CaGɇѰ} PJ'rHBDbrX|Ի %4uKM4K!;{\κCdRyɝAQTBߎh(7|L+3bfvqXr1ƂK:vEm3j#j-صhrDP# E 5A QߍR!AuC4,w6 8zhG/ڳkB$^Hh8ƜI Iѩڞ7v6a焦0 Ǯ({FDyD#"xX;hcQIe:Qs&gЗ:RÝg~F!2̐Qp;*`Aq2θQF E=zҔ|\gD썝͈"ve(!:{]qQꈋG\\'$%A+˃Q EP+뿋:r\Ҩ->%JR:Saoұ+POaVi,^Vkd ~I:Jvw~Dُ?K}T<&=>QңDB3 ')q~g2]#By>7A LQX!c Q- 0m#2f!෸w>#<\~^V'Fg.$ӉRH v:O a1C kk]~j8j}oZF+eoZlW!C(۳ y`'{ ^(ʉ=>s߷ƷuTwujtNMc}x~)\-+?ǣ%?7mwtٹ- ڋD̗6}<9?u졚_>=Zxw㳗a~|ϿmؕWISKa:Shڰ=oÎ{׆}wjO%%TP2IWc3j^ɥXH i FѴP@Zg!桧i7=Z/_ |1b Yb.<\ {:7o )N&3N[X&Ksx^K xcRM$ԨSO#K; $JznR y!W^Fլ;)IB_Z#RtZK s/9 0썜}obO\.lgO˳g,k]CAεݼM+_V\;! Iܗ0{5{G_/yzֳI<^5{}Zc{Wת$HraH+JJf-AD]oF } Et}K;yˁ> w%ϩ gS6%\ІͤDC7Xf `66MKmZ)ڔZTfFo=`>f=Bmּ~[6W(U!C ť J"}tYS'L8o7% 1QK^SQ) `uJba*CH1v<8{č!ŗ] A_1Ev ~WQt~Z]|!_o|EB#(^\VmcSB0:4*2as6ŔX|w9Rg-֐;7h> >K*wE@5m!gu| VTձ \BF=λ`lق7ґKN4>: tQBQ9l6qnig z j?-"DPb˨ 'bÄ3CPg.z̍Bj~'~brHP(c6U2-#;r#h=C)hдlP~x{Ita=,VRENRBReo;aV v`1ثagmd1 !n|0;t1f% K#T-hT`1#ͷ#b4WAͦ"w5>y%As͓f|PD|rS+1{úR2<3 0vbc6 u<[=?U)┋F`$'}_9NH8=Qnyhdlȷ)/)%?8ňy"4ӔmZ!%eMI{))^ nq"fT%ՀzJo8 1Wmc``._'0S˄P!8fSl0&S1(XBo-g gmgƩ+7ޟ}]?n8}_1͖Gr26*V!*4]:Dij#㸅 [TxJUtK1\Ie}`qK {KhnN78,Y#9XXJqEQ^hj(^Jad&EDCص9h`Ӿ|CCE6HXa# [(=l-POfNbv7N[0x헔wbLg"QRB.5S3n;,1(^Z-Jq#<ةQxz@~Oakt쮍mhkݰt|@\_ڟ_}8V$C ~]# S&qIIx|@*ۿb=G%ǂ\JAlG{4}(}h֭ÿ.52Kbi5:;Si[ !SA[}歑ĹE'Х_ާw|+uh V:կgy?fϗ'9=|W; c?0c WFtSk)>.<CZobRth vQnj:[PP ,u_0e2hS=O7r +x: xYПiMV>w^o?rt~pF+sxuMJnπ]_s_m?uY;._'lַ=?3jCk`|:SNNy)ϡ-I)B5+8jɅ[O8 ys@>JYz|'y[WS-n❅eb |6Zٮ b#pn܎sWZ0_MO:fcpvZ5R޼0o@aY~*9iOKZKunx^bM9pKHcjͷ\PE5p1T̞H]\XYAן??)ry^iiׯʾ7K_.'>nZ\ a-J/%eEJ(rr@8 od~h(BQ-Xls8׮6׆#]=;m$#\O7aZi)HaNiSUP' E06{Jц(xMM\3Qb)UAUϭw3`=~}Șui9xv:7\[+׀ќu*żd*Bwg$J1khp|wpXUZ:r#h jfl$C0'DVlhs9aMV7 t'cVOY4q͒-[wsf>6iQKv1QwfO AYAG%`ǖ*Yn5DAeKl:@Gf^8@mn}Th)SU0ᘌ[$;5ŗX Z&J# I4 E&rZȼ2j0S7RLh8@=ŋ먔U:7 V^rf,:7(*@RwF-ܶ)ie%i8dxz2 v?EVb:E&z0 — otDmZӢD(ʀvo1!e=h0 m .FT@Q) A1-< k^πB>rg&3`H-U-vm j|%G @A&e,)Zؕ7Fu CAaM݊(P>6pFV ʉx 0]n1A{YDP5j%kTf)[TЫ, ^PE%Fc=Ʌ! Z@,Ta) yњ,3 b 6mmux;H_!Hٛl ΡݍB<8AhopaңFX*0 mDX%AD{k|TpPZ{BXg7vP96I!V5K} 1>5 ٱaVM٤P4!*.Xj褀)b$diPΛBU~ZMoW<ޒ,^B[t^I;ݮm"EMT(NDG U@ *(a٫W;/j:+ W,UhRjj~6Y(EEW5b5 {$fUmF̀Ur_ Z#`1Q̀V /fN"U 돪e/U4Z zo80-ԺtU+ xY2ϝ/%dP.uz}t>0cڙKRd. 
Yg mkQyDmQ||&sVmvQi\򤌶%M/NsT2XfNdyD}Kht.Wټ8F}VOMU,t;L z`Att|Vo;|KWaw_4;dx&}?9&}3ILf7ͤo&}3ILf7ͤo&}3ILf7ͤo&}3ILf7ͤo&}3ILf7ͤo&}3ILf7?_ҷ_E}GyPqGT2]-f5bL6U+ *y/)\#6jĥ/I>(F *U?J7鞕}=+G26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,c26,cOjb0׋!Wڠ<\(&WP#aN>\-v3zt&yz=:lUډ ~`+$%Ӫo`m6mʆ5ދZ{/g@>n~\c7˖q7vn)"wf/$JkEkU8,MLQjz0Ʒ.WןHD\qq *7%Xkē'.2>q-|mG >8I0j0N'(\.avMu٨v6N RWO/ђ63WvEjli*ZZƪm3pHhbu_6ZgY:%Hޥ -/NG5 yI =4O~EJ^]3AӎG_%-FnoMO%C\/ rwҸǿ??K?e W]._[wVxpi*񂜞B(Z@z80磳gWPǏ/*;"_YFqv,^`<֮ Fj:N 5Js U#Fhb@FZ/Pf~ ʝjռY= W0K7Un_YkSWdeI2\vL6YꀿGʕnynry rJ0F _fԌ<~pj;6PچZh;P}_4nTt>In7?'4PPyQ]"̻tR~J+f7jJIsLکVsɫ!'y@JV*=!= 0X 7̕b_pYY㳌7;>z(E~Vgg{b/=•pG_keܣp{`f_YSWd>p傑I ?i<OJٜ;H{qvPwNT{y_;JX7X+Ze_ill*!|ih->Dssnb3 ޛ sۛ,Z?ܱ) 06uϚ0IO/lS<u>VkZ\GBȚ'M׮Kgӟ>Nܧh_4%zA֦*Ih=u$Ѳ֐Yl6oYmv^O>t{;]-"zfáw,O!ڶ<iێjAk;Ǣ׫חv3!ywI_gLnߥOpZɺh̬b{,'qtE]Bí۩ǼVW W`]"c\La'2F]FJETLV*9U^-O.NO-,?_.ģK /cW X$ `=_FI7JVr*]r$MȺ$'d(!̵'N[|ssl9=վ(NtK[?u/˘ k!| ՗^K_wn u~d*V?-=zOWy<rjbdt^˩I YNuӥ-H2])<kT<9)nr[ۙU]jJ1Il5P .)8|K/gA찜julDh1 QXH׸$/7Sst8Y~C^T&Kkm_ozi%7S 9|.z =lz]$:>>Z~T~ikt~t\_ՇR:oObV~@8A k(yvGiuhy:Ihn#vX>x ήoƘ~لSZh 4[7CN y׫Kj[iݴɭRٻ6W ]]@>\]${xguMX=קzHJ!GTK1`[jj]O?U]]?Og8?y^W6nWۣg"_a?ޜ.kV?ʀ7ZŏiU9qD`(.o@n\j#e;~}O( ;J:xƃKtT_TdjjTZ9s1Qz|Ra덗՟rډ7 ޫQ6;~n SH2BtAm 1D?m4YO qfsNF7ڱ]уY=f{geLvoqduuCB?衈пQ7e7WռYmUGU[=iN9s> I^VX jjO$k֒$0Z*TJ5Oܹiw[h"1t}.H; J)ʳy/Μ'IZܛ72md$ӱnb+lcn4:nWߣl\p2PtTfcξVIXuԠdCM:4T)&pRRZ&> (X;O({u91&.槕}xz t9 8q]VKoa]\b1n^pQvKዺ!53]n_0#itFWC +:PQ5MBଫ !(/z),q2&ƻv6eڷ~>^vr ?A0x7~b(~;c^(cS'o6-ogWu,;%/::Px5xNwGGB|7<ƫj蹽z6&2ONIB, H)ES{Au~eWȦH*BJt>'(J h"D X;Œ_aa[f&FQґ;=%\nWtK~Љg$A#A?q>bSnq^TGedFKE$vVd&=W(r:OT%@Ӯ}44 R.Y`+BRRVbb@I8(;5)TD`Z5&,H0a[vu7և:ɛ'OH<ᴧ1@9yZl_b $$&)c_P>ANeugmlȅ@ejwuO@/89zI'D/$e1AH0V)z֡R:Ⓧ3#6&gVEX9BRڷ@J%/; B7yc@||?5pTbօم_χ#=U/7n#$[JUxU/+]_^Ֆ!ݥDh*]a5o, Gytǣ篣u^,s,@n=[Njz|;u[#χ|Gl1{=2o7CǞ<1^6-ǒP]mm_7g>\F/NkK"wdVE&2߫Nep:.QwٓIΧV0e/ b붐IirY,y&aOhr[}J;Lx/p@L9%! 
[}&)IAm؀U$gavIZ m2qCP\̪X?+?EĘ)VMZ0j3uju'q1-͋qS(@lYB4.@q:ava _UcST1R@" 2_Ʉ"6EJ჊gQcJqV=))ֆY/6&R =92R&D9Rț\k KL؎4f3X,XH>)͹Tőe:_ՀΟ?-qF B$TSDvs(RH8W*70MJMWΡ% t` +c):3yIr 6" dzsI4aRsU!o^ЄI4"*a$hLv(QDA3J$ njӦEHaBvBJe;Vq#?֖֨Xy:ZAeb)xQMbʪ1QYPaR}gz&֐1'I\߲9_\璄VJ,N`eJK TynoO 8޾0rzO EK[ 6Akyp s 9oC*%vWdͬ6>ޥ6;  RAg7 cd#ī}-/~0%$ A%m䳷^:c4Bf$ȩ(T}I&?.t'GgJ=N~c EN>/$7R >J3RM-2_լ~wgC((ZΪR:k/Yi؆t~[v%Ǿ./+ aU%gyj9uz cŸ+cn/>^~wr:k|g_RZZ1WܓjwK`iԟf{+ u#ċ^zMr*{n;-n͍C8xwKo3E|Hcś%QcnZ+_^}򐠭ɽZUrn-f> 3t|w!EcE!F<:HzJn} g=$- /IؼMon(ǚN݀|+r{+_ǟAϮ8#n^l}7U?Vφ:Cl_?YW]N$H2OW|u%w%u.fWg(}`NvwG7nIO v{Oh ZaYa<&3޵JinHQsydϝ-*"1pf,dwMihi*]w)Q)1z RMDA@)Bf'iR_Xrm5Hˈ'b0>i'l6)UOelӇQ[GF'p08#zm@X~婔h67^{Ŷz(՚_Ǔ/SA!%4?3nu=v.)u.y݅"ѣ磟0>= Nj,uxjHqm`tA8mG*I+|vKcm<նc`C9&&]YB8!Y@^czˆ1ot,W.L@DM;$$L&/l"yE͓MX|(%EdLg :ϯ, RM))6%ŜPJRR6,k=Y1cM;'d@KJŜ2% o+m#I/>  gmt|1<%)öU)(v-ՕGdx.ôCQ{0rHRIpǚԍXW(;ZH"{Ҿ+Ar`-1<0((A1#"S&#G~B.= q'.Vg}t5s*T4@SxE Ti -.|y0:QF]ӋbW#VUކloˇj{W[ Rh~7!+7t}-RqXt-< X]Znu '`+!ΩT7Ҩ>oҫpxUoМ57t}M|>Vim(&VQ.($<3,c VZXp4Ft#7WOärS1ޯ߄ibӤYކ"BA+\Z6?ÚCyQoC#Ws7u}7>{ I5oq;pl[S.ɽ5 dJ/ \7+ۡU^BPٲ:mV b|\;n4;>0!\qkF)\JJ-a6> `w17Vx-2{kvkov'ڕ0mFZτDT>Қs6fg[FH(򥖜N)bxЄA׷z~{(dXaVX62ZQ#?w G5,R=krF) 0lRJKJHK ?$%9%;L B[ofb-Ҽ׮{WkIA2`LKmciQFj$,9H vB9=BKvB$eR8$B$XEu\q"Br8'f&<S~"M9!O 0di*jWݞ[>[7j󥙌ogv Knߤ?|l#c01r/#AWdd)%3& D#ܢVl4*kc&Pf `lrD`p'}NAkl8a?M$_{g_u>`ST.{&G-j*3^V}L"2JPث@/ p]8-yX&~r& 唓)WR{%H!Z 7p #C:hK8TXi"JMXTGCJ91;cfƽSMf0yAc5qvOS{o3`T#*n?l8`хo`{j|U cۀa ` eU`%8NȤ o3fœm|H-rVs[`3C4| + >o!%aU5qct|Gۭiu:tju Nl'# M?v7F^+k)L@yM>jiJĨ6x0bV;|xLrGm]Xs.5] Bh L& ֘gld̑9r2GN9#盄z9r29#''sg̑9r2: +9r29#'ê̑9r2GNT@*1Cgđ~ҙ8r@\عp$iO#'II3Gȑ#f)9r2GN9#'sd̑9r2GNɎ}-Kp̑M̑9r2GN9#'sdūDP2GN#'sd̑9r2GN9#'s䜇 Óuf6qkY#j 0dŀJ)іN70DŽWz/xl~cQN"׆H4D+CYB{CU*h0V N"}L>Pb[Vh&?Kt K< bu!ix ^bɂIa@r _pt*>X Q")9IfQ* J3KЈ`@ ZӠ\3'",J80Da. Bş P")HgasYVoJf߼Sw9GK.ÂUyv@$PLzɵ<a\?W w!a8X>2ƅ|] ;%Jw&(b4l]dW,'%EcG9}:v:o~OoWo~z˫ohqpQ}$8 ߗ??hVPެhEKf K*7{_}(ZLAqR J??O;#ׇwAv=fbլCB1=_+ 6mnJK%sܭb? Sڅj`lc>h#m%iqC^wn' g# Hmq/2 Ǹ \I-NaٔWV x_c0"Y1*#!a:%"Leal:x28p6F<ژX5= {{jt8>|'-kNA<`Uy"tgI_ \RS UqTߪs ^ׇA@a\?awpQUq8@#n K9 Q Ƒnœ8V|tާ(S0H#ԫLx>uӈ|rCʫNC!a(0/ybe+ZIEn(h/sZCۊNoAwMkS<ٲ '׶bbWR6z&7J,ɸӭ*/~vP u#T4 yHe4^D?QgGT @*9u N~b3?~Z3WS778[xw?ڃzP{C#c`\bҥV`>A|)8QY'r!xgFΏ `J%M\Y`~#JQ-D!tv|dOdOdO< `Z`C %шO'v9i- {櫓ѣFE%PU JqTQ( "CZ)"V_"&v, p7V#-sfNr}svDQCudIVx7/;<u{>OwS ןwmqd,   qXR˒Όl_\tzf4O-P_u@u, UcYD@YAزf:OU^O\5{=jzäENł-YQE^]'@JC,  rXr-|CS}RjwDMc/D1Q; aa?|豓MT0E71!W˦2Wvod\b =[x# ZkpBAB9zbV]gVM@T`. 
I%$֢B)WejQLr>iי^v΁9?@U.HNEH]~ݿ[ Fz:}]^ǡMvo2Jf#ev*|˫Lyjt9eRJ~.c#BE7hLC8.da=zxy=B)yv?͕@9ax|3xM6oܭe8!:q^+qs=7\>ýQZ5zhaԫ<stC" r*n;/l d& b1!VN,̶j?:̘HLP+lIΆm: -VA\BKyqi ɑt\6m Wf!%]4F0*Cu|5FdqG\v\U}ZJٴGT)±/5$]"mD,* Mg?2*ݰf q';9W3 e:[?!]|9;]^h !iT`q2!BɅZ}ɩETH㲊: C$SQCsRk"ioס*Pcp5swbGtq^naXԶQΨ=3؝}uM`EmE6 M,,{^9e5Pq$\0U50' CfPl 4kN#~9S@}tʩVi8D"댈nFwiWm'0foZ'TҖ)Ai&4VHA|9GlRƾk[4J|GLkÙTR&dNyÔx}뺋茈ݦs@,: ..KS%~wi9gB@&JPm]W䜸Wvhg1j %&rIFg`f)*&#de7,z]ΐ(Taק%y*iF@~x݅J?/x=,m2b VW됰ȬNDwOIj <\NEޕFd㺖}Jz :zG  ]]%ۓ_|uWaN x9wxxx !rw_.VOb6_ 6Z gQǷ'nb>HH?&RA"479>[=Dkn'+HZī-cO7AزJa>^]o\Ԃz͍۬U?&ܩ )ܤ|㋨sPӫ;(aDdsU߾*ۘGPƿF;-]\\.$9OW|R6x)çӫ3y>'vwGwnIwU 6ԪtPkrɘrn$6R\ɦ!Es7lEתV@4jX|y/˖rbCu:@i/XE+`F]LlTjUOM*QӧޕY 7q>40wÒ>]N|D7xc+pQj5OK<*Y9ӨsK237?:BkRJ!!ee)FEx~Sfv0e 5e֝:<}g8ɜ?bowYeH:8R!ԱUyB*BK+&xo Qq..Z&Gl Ĕ@kƠYL7 o]te#:!tֶ/Uf>&z[@acoIh.>-XGs}`Wk{aAmQtȞXK \@b äs ":zX-t v fvްa*F-@:)ªDސ h f:Ǻ5Qpv(0y:gU".&J)s"#kd-+gr?8ǤđG{й+0Aj*Td4 4hSeN,yC6Q^+NT0"+zTQ)=[[ۧ-D^dvScsZض=łof TӶ }Œ|4@FkRy3k264Xu iȐa5:JlKGPVDl` 6qƂlr/cm|3yY-*HXM<T;t oxڐ??XU EBC0M`=>G2b:R j(drB}0>$(YnBKXҶ-Jk_[C(d/[Smv21|_|P(Pv΁ؿ LRR+H"J\UGVgqMLC)"+1y6۷9(dS.~KCa c UF/bdfmI'ҾPM'MF%V[)ؽ4YmW+_lcS1oQbT6Fl)Z#(0bjV,-f-8,Й=+c[mofмBہo/ n/7/^Q MpxD{0>ՓCblޛ f_}ȃj e:H4 S׵HY4Xz9˳g~o2 nbx$rm=F|K-Q$kk-YԻoz!R*\_֪&G-?Bq@Ŕ+knݛݦs@{4JP_$MWUGhE@)>s7Y/k&7jܸ̽3-ދݫz>pyywQt|w HH65[3`Z8`&HOrW N-~'D"Gͥ?w!B06R j]I:Z+*[ꌩL'+Ot)rʡs;M; QG1~wzխ߇6=_+kU;rV%SEI4\Fw-m[mqKynnnM,- ICd6YA Gg9sfΐ"RΗ0z^u՛b1Г5݃ކ3Ů!:`љWҮlD&߈,Q~#7"Rw²3tRJhm;]% tutE5!Jvp&NU:]%UBIOWHWLKh*V3tp)b]shNW %G=] ] T*VF+tRvJ(yvut%2Lg '@͎l:IJ@%2-٠roo Tj/̉ӽȾe2u z4̪=v#Ym&v1zFVFH0֍0uG~_Mc?by}/0 nhȋ~Em 'C$< ɭԩʠ?./ZJJmiHUҀvhoe?_ԛkd׳QY}Jk;du0Z\o_޽ξ#dag_ gAb5; _Η[}kv֮ZSVV;[NVR=opPwUK錫p5]qZ2wUSּ;dz\)BW0s!mƄО8kیkg말(iy}nZKZ+֪&9aCNh:MUQ9?spF0 9(%dzۭLҫv/ 4(Ju 3vyLq1IE2 ot6UZ5ކS-b04Mh+P8:?^K k}(}bov SY,DwV\ݙǺZR~_o`hghWeLz^xURC v`.;CW z]%ZĤ+BfCts2zJ()銂OX c;CW .']O.=%+FPCt"UB[OW =] ]q+iw;UBZΩF#vigyW .Q]VUBC+ɸx K:CW 3?n? %F=] ]) !u dmRtutZ{Nˎ !C\"&d}ڭ葏,-9L]ep}w,s.`9g㸇)Ѯ̩=VGQ;kHE>ג)E 0ٕ|hZ{eVjJ`RGSJmΌ6>&zGSS.HM8,]eW+nqA*{}cVc`bz-H2.x`9MEr nwǎJhe"9"=mj/'\'hC牙k+]ឮ;X q*ֺ3tp)&]Jz:@Jф]`!UB+i Pxt$t\]%;tp ]%TJAz:@bI&:DW &+oy]%J*{:@@VKt%ǝWtZNW C+!S<9U+UW 2,NW % &Sp;DW sxO*zJ(JIN\ C&D/h٧݊N;D)2"tB[P> iZ|Oj[ ^hռ?uI ~duAI]bPŹOAUp7Z(yavtϼԤq%R_r@m&>,,F#}T,Ι:g1b \e%}H:}$_JZQhuˋMmHW^,㯗r DzӷwEXKԽԗn6ͫ)Uߛ H67{ *%X.P!r?zN_Auطn{o]qQ>ȵ(( Pf !!Մs,@SiW Gg@/+.E E.c)Bç;?(҅*N bu+¥&-)qZFe)A@kU0k}f8I)ҸTG^WtRsPRQq ̦ԟ41]-[\Y~t1t}yeYa^f_~]҅t%욐\S |uȽXJO,o~4[3Hݢ?xzraXH4 tV"?BڜU`_ŕQe D%XN1FbdCHBx\FsT :\xQ@HrmϡERu;"E RrHd 3âҌ;$,"0 (O wZ`+kB➄Itx>ǽ^1]_EKFa2z#8MVEOJ'ImEPPYu[O)] #a+L vB ̃FyPExl+ ^]%!5kHfgYcAs͜0+ Fp. BɟDR@gE K<˫!8fDeR C6+_k+ $= 3}@$-&Ƃ4; l[b`h㴖d %c(΍ _ C^\jX)אʥޛN]GUϯ >id H+Dli D"28GFE8A1<{˟J`hVtb oInhu/CF9NZ&rI!GEvnJ"{ z_ã+%%JAK7H/Gb~]\q SH_R0!ҦtV>1Y.S\r1?/Zwnѯ[93G8V}Ke߮-M~X$":5UЦ$Xtuey dZr`4O'eGuFmjcmuMnj+n1 KIsP靈rzQt>??VJ([ Љ_*aIu;sSP/W~:}_o}cL񫿿:~Ku`#`??xHܨ_GղeT^Tߥ^d7{|lWȭҀ(rn /^M 7;5Evv9kBTmZCc:qeҰG}mb^:~əob{NemZ_Huw濾оPvN N24[ݪkbU6M6u@ :7{:~& 0)-A7/X$e)b< RVb"ᡖLD$h>.֥QXOf9K[‡eu -̳2]ddXra9LHݡbMIqTHi>H2рQ9 q$R9a?_3fGfRC֤@vLx$Bڝ܎{#% m'*9X_Q1>?Ҳƈ0!XFD08M(rn!:2," &rX+?(Hs _a<k >VqG¼8z֥\2s8eBQ_+} N@1~^};*Ҷ>fbgsMPü]gr1qE.f^gKd\rETP`ȑ!#%i{Z^VQZ*sX KsNL\Pb6(N[u/e&#QHYu2Lw>ND佖豉hjD4^6gXs|vQnm r;E䓕XRADb}Rfri, K{; \WMrNR?b0࿋TP:eS*]uw|H>x/rFaKr`2OhaЀ$ ZhXzoA:X@.(qCvY0 ШJ,VJ\Æ 5'3pKSӓ{ڳlܲ rV;@ ! 
3EJ`82RJG 1j](G@(/)Gm7!cLjs7364ƅi}5̅G+9EnTq(|\l@noQ13vz2A!Kix06;^靍c)splA7`^DM\ڤBI$R:"4Ħv:k!fk<ئ㾬fm޳vn}Dwmq.d@ IMbY^ =Ş]јFv?XgZ&뫯ȺȢ6 %hqD{d[.!zR%cU7}SdHg".hT4;* *oo Ф#fk 9vԎ`**y$m@Ƣ8M!T79aLi iCǷ(RH+8< d={:a`K5[P>*Tt9soQy O!u A#ǻ}rj2TmKJwPM+%g; ~g~TֻbBg7=C#CO ,R۳;O ǸRo ŲHPcFC ,Yn%Mi,$(e 5m`|HNt'VmIvXyL+oRuqۧ=`ZW,)rne;+޿Ī恭%MjNtaxc ?Xׅ~\T9Le,&g/ChgЋEQEn!w~!a(T{nbDk&#._ި~T L/jl{gl?1PW[@7{.ntNgS7ZotO{6o[/= S[vx|r`Y\7Tz~|r2 Z˯\kr|baY㧽j1zrHnjHM vl|oEýp7AbtRXX/nj@nݨq5MZ-'sPg9!}kS׽_Yٿ( 0oS$NlߎOQ$EˎllT/W|͒?񒇋{_,#/L tBo>Q՝[ { _Xe*jmZa\䚼.GllL )}mؾa5ijN [.Ǽͭ,g=Hv *֋|QY1hJbvV5ֿ&mP+2q罘.[g;;?^PA?v `8d QQ)">}*Ylh_wfzmy8+'R>xǪw\ឣj>oCgGVp?L}ވu>C&>%^1D Ω0# أA/Y>pG޹z+Y[4AފdVת7j5Tf1JFRC1*]EuT)H5LJl8@*e5"J:#0Y3\Zut٠eY3'LJn&6nu+XIrZLi*֫n0}fħڽ6ˡ72:x[!J@6T-h&O)sNSB%q߈1am,Ei ɵk+ZE켪C8>|HAfMM-OxՂ|]0DKcZ9c(N!}SȦ N1ԕZlA[LkHJ4:ڵ#  OKdBI-)֫qa gߎҬccΈTHk5R`_o%7@.cM*JLjֱұӒڱB^>N|:Nj;lہo]dÿi3 ly=K\ͼ'Wm<)V3/f^zn+V70l|^;^[D>0D`X8 bS*=p̑o fLhRˆN+K1VI;[jCU]kTdy-_ZT J g b,+EΘ6UwobvgrO(׻2&B.Z޲kF]qGi">t䫯:436ΜҨ_ґNQ%5:$bB& 5( -[y10_ok+[JM$WB Fc9`jL"dc#O8uz;D;<\m<̾S}Wa х) @ 1<x vI!ED%-(挃8;q猃)'RJ٤DXIY|(\K#R%e-cOk y%P`*?)%a..!fb_H>qp+<;"v(x2أ\?P3ϊ8eA2'k8PA+锵j0bEvՎǏ}(X8cU)ЂIJ^S2|aSۥYUԀND3]I{9hS0W.ű 8٨Wj*Bmw%u\irP!OF8`E $K1*8*pK}_8뿒eŤFLq IU۝hV{e2A*mls3F'~+f }OW g8mLT)bSOF1m@e1U(? 3pt&mMlMNvnۦ#ަO3M4V0j扄ʪJBuPAle FŢҳ.N΃evE-m:+v .x[ݮ, Ul*#O0|赫 1O=ؐOvt[!jGFG`~MЋfXS8p d/n ƏD`7 6ۗ#y:4eX*iMDӓ٤>;R=B34$n̜,=٤0 1o`LȾ.F\~[rOT|6IV/z뵼Ńl>7tVwXB\ ݟ/{7 a=fF{Wh\kJ?w%8'{߂OMbnjߺrw6{PYܜ^ј;SOʭ%zHW[S^uv-&ꉤ[fN飦tToDX2kH0˘>ϡ|=>NgJNXˡ(9ZyNV\☷jqz*1Nd lf%*Tk,&FR3lvZM Dv9]dö ݭ3.l֟#ڣON-d=*Huv *Ui Q'CdڤUBx䕩oWzT(HAndQIs-A9I(UM,;Fr#ap%ˇ"@C6"8 nmqY?ڞݙWdcնl5G xl,&Y;p7 u4G~^7)c/+Fv;Y.P,}sru}Eۤ0o}CZQE%¹' "S9Bk9ˎ"3ËLEa&}DpVۤp4R$:6k0ɷ㔴DwYc%Z42 3eF1 2ٺtgr8r1\q}&0\+ϝekoy/O/Gy/ƾ7S0~,1Sx{ETh IȇmzI jَ>BS8T8 F'>{w*o.9qq5).jJ1Ρ905:);=}#=}OX,wWykek N~aysoڇOAr|0__} ;c@AZBQ2"+h0izS`C%#D9r >Ž'I⧑ױgagE!!,yEfr۞\FȼKЄ;[{|6ZWgsn>rҹQ ?2ZUzpB=zdZ>۹-@xSeT<68As[Ps2n ᆰSCT15mm f73[~>=lo"DZVY"2&쐻Y:2&Qڞ4fZt˘@1`U D-j;J&\ %g ~p%rAuHգ_$TN:H\v+,Ə&3lr+Qk;J"QDcW"?jwjja&\"UtS?U뻙qTZpurl`=\_e:ʔog߭= ]<]j =Vv;%;ル]^\//R8?ϔ>fU ~kO]. > 9^}6uwH dXxQ knLtdPzJ#;ȻhlȻim=q~NGrlZH}>k*Fr_HnHYcW=j\0A*Ql+;MV \^p%j㪩\[5ppe,wJ#B7jr]7USˣǕ W+ V+`t%WM㪩dpu #\`mjr=>L-㪩D7qp%np%rV઩US90^WWM03w:WMsJT:&\ WM8z)ΓykR˯ߗhq.)s J^~ʟgϔk'}M&/x/D[%/}T.'M.^GSF8J܃p?kzp" |k碞M^7 Kwpxv^3սxjη\"Y΋|_k:u[2EG{ϥY9mEZ\W4fhtyRI5>];sU5` {fg#$@ha2?)v/3ۣ륹9٣.߿9s~h]ٕ}LK/|IZ^AO"X~\zZ±M{A^g+:VYƽ>6ݎ^-Mn$7;붊=mL"u?k\̈́tSKWh*U`˦#Ed4W{m!rK~W4c LzlkpVU;>\Zchj*=M:@\#\5u&wǑRhUS66ppeہ=EWM +\\о\5hƎM! At {ArIuvN:D\wtG`U ]5nj*yj?D\+ {ϫ>L&pj*̈́CĕcJ 玵'\"װmLq\c{qr3Wݤ=X0%OՐrY{ 0dbZEaZ 2Y =im5-ྏ|z(TBC?wJ9LNi ,e{L=e@;6x\VDMc6JSDeV%2=AymXoZA*Gv9Nzl#\DcUkM/jjƎ W+Һ#\TZu&C/vGO:\q o;\p઩GWM6Mz\v^cGjt&Zǎrl;h&\$'\`z4A D78v\5'\ ,p%R@7jr?yZ㪩`qs:•7^A?OEn ڽ\)#Zݱccch޸d@<^. *s?y6u>)rkP+`A&xv#2e׳O-Q:=;4";'or2kj=}$^i3s[6;R*505=k2L% Wn܄6:•ء+\5^+Q;JVWն#\`car誩8v\Jfʲa;•eL?&w߫E3v\5W4|R ] WM&hj*O:@\UOU\5/zp%jѸ㪩):H\gib?bF#;z\1W?mz'~x,J55~=9=m%- E[~ӏ g??2{}ZĨr)E֠fY?ۏ%n^7_MHEC/7pzzuc!w+u~[\_( K1 2^o>[k@ϓŽېN?lcOhK~}vw>pv[pϖ_oZjeo xOk@ۙHP_H>r|Q-8 G8>nϞ4G4]qr&]gDOϟ߅mkEڗ \y.|{#51%է`A_&os@MȪXta%4%,[޻v祾?|^I2X\Yɕ8˒ M?s)4<7r!A'XW D7895yK) -hd@gdd֚Q9:61DоX9:-UCVķ4B٤lMraPc,mhgArWbJFvްo`BUQJNRR!fZ`@TJ |KJ6$ZF!)FfWYECJz+jݹTlVĔ**l*F8`,BQ2KӲb!-l! 
P:4c5XĘԍԚDt!ARsM1E.;j=AŘ F֒7Yg'm9tIIY׶$mZ)iMXyJr H>FDʿs \(D=77JЭW%TB*JRYZ[U^U"\eQ:yOPKG5VTXkC|aLqA6HԤ&,CK4qb$TDڨlM%X'"GkP.iC1; 2 JҤNb)Y*u:\`c=uVsCB#"E!o$t ijb;*PrYS@(JD㒄Vl7 ܥXpQ|OƮHE":]$8GJb-^bKU^ۑ-#XYS3BX2J')yJGS sӖUeJqX鈋u4hj5uv`j9&pD 6cnЉSڭ͒xÜkm>%6I, *;%Jn'&EnbԓpKzLUT BA (^&LI E\Hd»+ye1{qE頫i-E2Il| b8b@zۈ I( %TM] D2Lao e`w)J aY"2%\ASJJ:>Ueq< A7mN;TH/ "bp)qsp2Ð C&d ,H"z*r$‰`d !$TΈ޵qdٿBa`," &$_A=c4'vI"jKu#Ķ[U=Nc@HqHq[+X5h*f= 0"}\*Zr25ti)ը $!(yIلF`#/=kO(@H*+9RO[!BI1YV  ѩ-C^b-rI`UWuD8Md3_-]b_|1#9I`BTRXpD1cy}wͰUc./grѵi.m\*MNV˚zޅHP6yD-$t>bf@æ4dnIKD@KJ7AyA3 vڂ|A cA1J# I2HeV**2na@) /. D,/x7Vdlh >e,XN~d} yrx*ƝfE 8-`9%^V0No7Vx`í]}'!O‚.&jHmE1+pmpi,{%!z\ZAyH!.p /룫#D ^H*Aʀvw %tMކFL3yz5|< hw6Q$D#+N;(@ "mR82Ggm]"ΠA`&Ybm4 K4К5'7ta,o3,4LkTRp0U U庽zˬ Eaej,WH6 "kӤe ak/t.Yu0qyye:4;“0״餬bi$趵X`ڢ+Z`8FL-FTai ,E4^GuA$ڻ<5jhhƸq@Ϸ|Pk>sp0(A/7C|C ?pa~h>ieg[V8lJgŽ+g.駣?4K7%m2 3Աd>0޴}'Tbt:~TSz 9 %P[Gg^t Ùw%$\|J &%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R+4s{@Y7s<%)@/Q d )H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@/W df!)(,ȫ;%5r@1R@%cB8AJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%ЋU9H `@0~(JfjVZGJLIKJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%Q}L֥V2:qҖZO/ީ<[dz8. a@i5pI. 0%X)= ^pw`ef)1})`o_5e$в81~*ةְ3jUٸ(}4 c|R^N1dZt̿F (n}uFWo/p+,ͺ'J|xD^lk_SKWZIjq|59;?o|xTˌ_8yHE:,EcVw>՚yjKhКl܃9Yfׄ쳇{sӜ{럴}oW1'\J++\{+G]_TNLjz(᪭XY) W\k%\Z_)<>\J=0``U3WC WZm=\5+pÕy@Jf.?fPzJ+!]5=p=2ج{`gD_bjߙ W0Ùj7mAWZo peUNj>)z}Tj֪3ج4D_brn(\x`U)k V NE'Wy?UTc)kU^rz{6V~yɕl!&, uIo5N-f_=7hgqjGݢ:R5)c*f}Q~Ch#{3`CNvl WS.14۾ JD[ܣzY[pU϶]RNo:rډ0$ Fί'Ff }mNߕ:hW2VcL.t;XEk[d vCtyDnHJHq8KO:(٬u{?+#OǮ0e2XK~gލ^Lk'gWSzhs#`/`Us'LJf-p$JH66)Ok.!s69.yt.f-q7dɻpqOg;RNMu]~ůֽW]E euʁǐuǘ+'W[KpGT!h+tsOߧѿooˑMdVzEO>8A.Ն@݌+W8gDh,B;WWsuɋ"Y%'8{BeVO.|FkM7oGWm|"a-7m|u}m ٛWn.w\n0nOw~ټKc~>sq|M-jUNrߘ\.κi6&eSCUtqr)RBêY8x,b^xޟ*_Qg6}Em蕁 bebˏf w&꯰+ly^N&_'?-h |7_-ht3V圔ɻU-V ͡xC UjY.Oo~{J \_/H%t>w!gv3惾և?t WUy$v @0vQeD!UvZ[V1z. 1zU2Y Ar /a,kӵ%D'7-/ۂoöG;vǣ?&(/c\wBwlڃwlM+ILii3:|ޒݬ crѩD`P8M' ,v1Dѱ?I,*O0QO5 zF=$L%4V" BFdcdE䜓m炱6Wΰjt*t$HL환}d֟%7Hs@];$9Qz22HETIP&]{'yCtW4}]b7ex1ksN 5)ю/qT1Jsmlb=s k olͺ6xMsWW=܋cn1գwm6,>^G\\tqr­/>1 bz%7$CAY{ 3i,];ҽS1ι0blA^~x^j xYm&Cآ` 1Hz@PϪVCv HBK[J(*sq_J˘SOS?+e< AFas;"6ϧg^ul.j(Pu:-;]Cf77{?]36]ܡiYEO}hMvg#w67eެCBnqcSmo,0pw_vw{whBWf+㳸5qg;޽]Io/|%oܹ.]]ݱk9G9I e<5_Gn6׼խ _s- 橻~ϓ.~'~>?2:4܈?79&>37yAøOANK#7eOw5_dp$Hp"{icHV|^'bόެ#J ZzXO^!&&;?jIl{S)JQv4)!X(N)BŶ-dc.L2 D6sOL> ~5RqktʀlDs.%RQfJ֫| *DڋXUoۿ)0dQ S}vCPX:+jܭd OcM *MoJ|iu]b 7U,.^?|܆f_|V]QmBQ9+e@Ҫ+ ڔX[vA֤3BG||\]UUM#!%El,l(1a&* Mndgtn+Bg,  朞5x"?yurKӲ._!<pcnuC;m) g2aMACimc:#blQv-˧΂M>:Iɮ:bqq$^ꌉFVb2:(YC(onZMP9Gkkq!pq_t슇q7<|,9~imApsE?Z6]cs==z{x) c](b <ogGֆW_?ozzߌK8":(XRDh Dl&0j(+%Ucd"7we L=X,`4f3qbG Wm7 '/BSU3k6-uAPs޿;x)KYO/}.~QZ^1.9d={C;a˂Z̛yFA&k1a'ڂLl!3:$U3B.!ptL-)TĹYO~G7#t"sbɗ[i<ڈrgԒ7x&xN-fPR|BMHk\QBšMBr3ӷ4C$!` >:Cs%c z-Ƭ*l')]_Aɥ<]6syi#dz)WoSHUm**$c  Z~Jw zմmC3EA{gOě/|9ø7AH%1LRs( lUG}4dB4 gr-4-UFhG8RFK 2b9j(+:Uw_ M[Y?NAkj'׶ˁ+oU:/QK>mQ+?U[  juUrDDk:;hsXh◕ZX~(CI1-_}I[@Dmwk*'e?A?v]\|Hf#zqPڨxF!l?Nc :v[^}5Go_g\W]3/|ԍ\<.Q^Gm&޶ÜNpjx*mnM;Կ?ZS]r1jp#38^=Ĵ~!îC! l5NBgKtXXtH0\mQ茵eMM7}`?`/ 'Wqo4o"qڟq\>9 |+ o Ϧt5$i]ի;~'kUծ/|)6y/h:[$&vʻ˕7{>mYk|3N_g%:&0Gvf)Ss^4oR]G[}PrvK LԪbʐFjQh]ܳr:mNu`gkޒʬ0D> ӭD-IglMt!xMA fzMK}3,:w^%]Xy~꜏D9E( 2IDC\JRPNڤUv3s?B&SY;!ƪU9&z1G0$US-w.qZ٤[UA 4y5{dk9fX}2$]th G[͵KnyY_|—&zPfv]/|駭Ij9}[Ҵڮ(NIE|m|fHYrP}QE] a2V EVcjj9YsifvY^e$+L[wVE,V:cKqUEV^B5[5X㣒%Zt5 T0 Vd}fRtԳQƇ{ GХ $ *p3 XS,NlBLٴJX.F3*-x9W$-p忂Gٕ BPX(:ig4z#PT2mLlju3̓ft\Umd}{YS},N#*+k5koK ]cĎ[{s KeomoVj<-X5'YsBTA^'/"5Ɋt#"s;;ŒȇޛDdW3Qmz&ncŪb Ib͒=͇rm4lbbNf^uW&!jGE'ٲ-%N7K(g. @ v HM({ɵa=ˉBRdw! 
!4ZRqH(4nI!)aBR2qjZCҬ[`/pC=F\@p0YJRbemhIطEQ5R 6KLbhCb Ҫ0vά3.1N6-:-em|2FU&Ea ՁH++! ᬂى17@\CRɬQ%|5ca٢B\ :+EE;uΖ?7AfMMɭ؇*1qJBZu۔=ZsXRlkU.rK* NYrI:=>H,:!ڬ,ohEp6Mh/Xj`I0HSk;(Qult,8!J CԤ cM3(-ʉCMYultK-е~Ǝ)knq'ruU^XvmUk]p^|{^bN_Ϊ9_ɧMxUA'~n%N~ӍýfgiOxYSыqf^YX.sl\L?FoًW+O֎590Im3 .fN#6ʹ4t}Ug-49|Z sBhw\?an\>txs͇ڜ]zoNN+}%7X_*9gb-BWzBqIV?9 |X[⟻z4jUf Xz/f(Xޜ6$u qPnȌi i{^۞%t}GϦoUb )DJ=drnf 4r :.TS+%`sJQUd6s#*9F RRSl=w6-#]}{[JJ"뚹@q:r6  }v+o2>F\g2Dd.But뵂tCH{z{` *`peRC4\@iN&F5Hri>ƯXطE4cIEU 89clv5*[| $_w:^##˺q_]u[7oT!hg=V%xV6T0hFRè.C*Ea٫[ZoV_ˮ U6 ^[f4̳/}y|YV P 8wM.5F`]C5590>~5siF}rh\pj`Κ ٚ{5{qiϳR=uu'K)j9P˳U 9ai1J嗶i҆4׿ƱX'4|EK+->ũ\ז͏&/W-/y>ӯ;:x4]%Zp?&Ѣˆc('6WTNaNIO>{}3'W)~nG%q%*wlO_݉-V{"SO 4 9W|Pqb|w>~hzd,EjA?DuhNT0w_ywU][o[9+B vK t0˒"ɝ8-mbmyůů9T$AW6n?zC]+ Z^Ŷ"gBZ\j#i͎h+5kh]Xd p4^kfE4 ]S]0t/*Б * YQpԘ3Q R n< ',ayƛAK qԮI)5V[eQqґ2lv-;=88!88 [AIOdЄEG *$^hdLpL1S-͘ o<Ö}Gή̄xO?z=6B|hjz^y~1Z=>RcA%\U qμK5 /q[첕"#fRi6rؐh5Lr%NR^r*yR8g8ǣ`6,s`* >Ecn+gL4i)Fs{OFw/wm2 "g$R(t K4Zrh"~. R˅&,POxݨ=8VB|..$QHTA:nݱx}\/hEyבSK L[WF8$3}eqKq09RA0LJa O1N6p/3PB$g'AÚ#2kIqy@89N^jG.^wa?`Li|׊+\BJ]gO;sx lT^vr5*8{( }aO~u/crlW$j_o'm͕/M-)[ ܌̪ʼnb(GroϏJ ~A/hg0߳cӇ uúP/oǟ?_=e|ϯq$LQ"_27>iUilo4װGӊlivNv[kk;8~; ݛ0S״yejN3nc 8oi]qJR)"B*ZC6e7w6) '}5|JAA [Mc}H щ .P*W%=30<ʇ\ÅLPV܅\ǃ^2-Cfhap-/ x6"E&RH^4N8$&Õީggwܖ}9>TC@N~=o(A +54rgs:%*Cxz:~t>|6 '{*vˆ z.vz ǫLH~5!\Emw*7|o8(e= Soy~m^־s${J}l9 azvn\F*ߞLwou_w{=cPbRB^q/BS'4q|BD p04s!cCj56Tw[Wŷ9H\kJ=!&ji.*`wI4?>ipTȬӮ2@o}hlF'VڵŶ"gBZ.7=z>:? ӯxmYt$+\[ >XYk;^cR7AH.DX.]GT8ja4(I$8kKH.ojg@2>9.h-D<NqO"8RD8'jnT3B/:O!B .G(KΪLQ;yԉ\9BX``)IJDڛ$Tsfg^ }AdE-*P\Aj)4=窣*$X1rKHmabfj -j j 6'x\=wc?7zׯ~-6184%oK rcIA3d*x҈h>8-=\8 Cp]4\*/IvDD02W.&nb -ZmQjj+J7Ē#:"0tp&hQ ^J*XVS' O= ziA=dDЋ<*5!/YdK1&օ$TGema{XLuQ7#wO"ӏC-,le"nxX;Yg$r eQgЗ:2slhJ/2 }GEyPR3.8!(Q{4-b1qv[ďW_C8'ZRjUa]vqƓc ,TPu\83DsZ )Ux x*vjua{G0aO6)gefeoKϕ EЫ)\w 2KmfTL/;oVuh4C;D% oo &[N |qPP!%/ TU oo#Z&QN)a~pO ،S⤋ogɌ|oЏr@z'_'2~[-SN(vqdF7N'bY!i/;040BAH.3~Ϸ~-t=A_ZHŃ׹V[Rfj:F@VΪDgi/l<)I1#1xa)Pq"TEE'GA',XC*jǭΌ+gUiYŸ+:rYԪE]:/|iIUw4 >bІYo|Ņ[ W V)z~Ty FGx;;id4c^b:zTTp8xD dXApÖ́GAK8*%zֵO>4&zⒶ9o#{ٮ b>Ĵh>wD>t!z%٦vՄ>gD@*M:iPW1-08!%|ƐqUZ"EoaL1j/h.< Ď+_tɔ%UL> F#gzqW/a]gZkCSƆ4T]?׭|Xv8gBGN)Oe!wBqVJnD^lv1R(t-) G%F=msmbۈ 6ѧGFhKRZ M պ'7bD[G9r~0t?"s7SѤ A m}5]X+}PN\؇"RHr/n Au|սa*ߴ~nL3ٮyvZK!l3`"1`/76nlayw>n8{R˥G'_F5N_D?ʽ}:ᇑ;/D/?=EQp)OO6tN[EK䜛c&&ra?hı׎r& ^W'/(*^hDNiT9>ԅ!ǔJB#HKM(fDWzL`'mS¹ϋWd3$ǐCd C%h=?XsflK.[r.6inkاb+Dd(QN#ɟG6G#ns-qF'~Aij/b^Ili_mNN#<$+ozQZ/omk b4*5ZZ50!,yB^몢3LPoa$w $f_tŜI[HiQ:唳SL(frFV6l"\Rց\msVx_=&YF#gM9 fR0jVS4bqกB.%";M #PtD̚%Q*vh߼m&Wo η>>=ôF*)ZP:X. jӃNsJδffD")Y+{ ٠eohujaL(c*V(^dQ!xmZWV$SFeN4TY7,-VX0a=bDT>pWC϶EFcMFE\:(\ h*h@#o|,wU*E4d2ZO`u˶Hht\<̷4kk{C'ɐrl+?D[$Lq#X/ȫ?J'EHފ#oؑŪCeɊkR)$e Iq[⦐Un *R4D]ɘE#KZW 3P"!dpڵ! 
==F]5rZU֫mWW@Օ"޵Ϯ!b8[͞vE ުj4]FС.#lbOѓZySQKԷe/K[ޱp~n/uS7_?_\"֦?B׻ΒkUhQy"XTLV^!*d ڠrAQA@!K5$s2Mr|tt|s{Ұz+ƫ+ٟe)F|r *dYz߯>9W@\4(8뽐9z9;٧9 b$R 2lƐߌ/T|jTbE8﷦}5svh릢d;k*۽1xCU;o}?}'qÿj}%;Uk_x<ӿ>m/>7(U:v7KY[mz5t)"|dvt2t ZZͥ4}[oQqf?*|ܥ QCUE|Ɇ%9d6Qف ,mN_km:oY+>˥'tuĬ bU)p"ډtac,R&j|Ŷ}6TmKVβrBk&4"U];w<[m8j' 88|}&[zºVusrSٜ%%9h%Zͺ&׺ ɲ/TP% ]0bѺhؙK梲 ^ Zi\A$]C\SU6yo޵q$ep_v ^'8B?%(Pr U 8MdփӪuUuu75`a`4m %wPTn7n롘ڕ OAŰ`w^ƾq1INIʝ<aQL'!y$)8I0T3?P5롫a>0 f6(bXQ ?4RJC+0R Dd0Cs)( LU+;T A} ʊN[lB!vvtŮ &YZEaht5;?L0$iN.m1353{jv^ Mu4(C \84< D*~ BlJ5NqyZSۺhP vS<~zq~rS9{ }W\G8_4sKzn7$&O_^jraC':{b|{OW]ݐntw75iQ0iN>o]]Ugu9Ȯ^@&׽VrGXHv:}W,R bڔc QNw&~i&fϾywߤ_ߝ&?_:Xà,A ]Z[vMۣkr}9C]۞ǮZ(=-^]bUM2sAC׻&Xţt%/1j" FUZ5WJ".V\+_C0E+C}tux6]m$łɗ~J> HmqP_L'/6p"\S'`;Yv8,!.ȳ#dacGNb:2]˙ealuU־lk3wqh=}jUͨGOv͈H̱'IBf6NX!`DFe^kW*(ı QZ219YM3)6vޟtiv&[,2j.CW-g&)BI'Ťu%.u=ԺlZ&9!jxY%,պ?]۫{Ph{=̧OwǼϚN~͟>B؛pyqQ8 _-M=sNڔw<}mo9 j!蛖[o$\ L')1,')ا %{ڦF>mIkol3?UI|3x)cKJm)$FWD{T>rj0zL9ECP0QSlbs(C0HaSmw^g ;RᔛG8l5hd0 QkSM*dK`!#a:0- 9rGMbqĶӉ01F͵:Emyڽ3%qX<@y@3D`IP)w-%,{!0'Xp0 [i4m:tVqB3  D9cH>rA8F}ʾ1X18EfD="kS0Œ"J4[,H$(F9j D3u,S+HiEb%^2XobA% 9 )0sɌy]xpZZ3*9ef\=.kţ9!/ Z1DLZmP iqO)Yp1d VScOǂVǡx2: 2F$9*oT >Xkǜg~ܧg٣XDAȃ] WDL+ Â;2:k"h)d)rF SVi!!BgDI(sDʲp*!vsV0bQ,4#`"2""N# #(H80MD:Ր=<ָr6~[oJyWv)0bn(UF8x>rNGxI+ǘy;PyZFY !nZޘ`{DxDHk]=0! Aӈ qAs")Nʠ9$:␳P#&|TD%T8 5n|Ms$˝0;z#v9λ^MدH)NhxnRntgrS.x068PR❆M[0 SuP GP0RJ58 {< ==|n]_8TA %J L!Goa's#OIq7Ϛz'9>y)~(8BH#<,`<څʽcSK}6?bG#U-:dv_#␔ǘz/(e73Ǹ? me8¢)NNûۈ)DJpR2XxR8"VT Gs=^PN8N4avqe]ȩ"TzN2T"s)WGmSOsMB 3U!;(&1͠=j;UlQv4wh0]+wKcJj)kfOUWGM]꺣i݋F0mޘqy%/ O؋wOGn{ׂë~fu^>"[]ˬuK(5wP ۘa9?At$)%)HɃѵ5p=M6jcVŜfIl?<t֬ fkod `|4l4S`y1 ;A &IʫQm<ݸ )z7Ze.0-mڕ$@}vEȶ3}L^wot+DΑ/'E vLL56z-7owd)4wD|Jr`|$GP\aʪܻQO:>/ jl 1)gj{ȑ_7"Y4ps v>,|Mq,lO9bKeNJelawMVbU=J!v-M:a301 [5:@- *|HԞOrE'i}pA[U?1ϵxtjv2Jզzit*jfkMF!u< DlJ鄲؈tlcݶko<u.е!uhF j=-(л ^,N DLQ:yۑDB $t(jaipm̾À["€A0rUΧcqYzRGueJ1ĔS m ڸcBc `[K { QeȤT%`FH1 d{9^SIw@BHY@{JQѦP 2KWY/MA*D[P0>ZTy  drtMYv,#1`1=%Ɍ%fQsUG%Z,Kau\*k&dCt¿c3Χwh X>ٲ }կ :Lt7>C_xL )0fj% wtƚ;^s`uoD݅LcH#lAVj\-e"_ytm2Q1^Fm*%ya:8I\ӅG4`=ݍKqWBX̡ٿs`E;a`!d"9$AN %B'AD3vl(&9ZTl[e~AܺjO &CR i/!%O`b߼T+]+5V s")2ga!Xe".8Th+(F,lѐ2dCM"ƺ<é5Xcv @.K ^]ԅ5/Knq!?TLHh&{J6'vЎr9(Ŀ=W|/ w x#brv~:DKVBAu"uj"1d)K<]ͅ苡OY&O.MUw1';K,OOx%iETί}R b0:tXw$TIFmcg=72Lv/ྚnGdܯJ;!&衿%lf}䫈TzĽ}n 1d1SF};Ѥࢧ4<ߒt ڽϴI{|9um.̞\ˆ~1=K_t X"j׽M<]2Kep]uEix6SlGډu>!p Z(G1ym!a$DdZWȼFsd4Ȥ3ϯ' X&=Y@^?_,5bݭcI_Up ߠޞ#hl{C4˅+a:S*#/)*Dݳw' \:Mޚi׊%DOOgV,6$LrK82*.C!j;pF&Ʉ2W,}^Ux(JjJz 9 se}8'w sj=zsRf^")dX`%*. 9vsUK4WoJmuvu1>zxJ͟bS;EBk80&?&ogII9QpܱWʧn-X˓AE?',lۄWgOyq2=}s?׭=G=5O0[.n`?|`݌KkJ>,5nrZl/ "0_ ؓ?F3FBln4UFbdI_&ۯ>N<9w0\WѡxrUZmU)WSݓ|%$ nI~շ{շIkq?ۤt#3W̕ljۡ ; s'؇\FC1WUZmnT\sidX`\)PbXZ b)5f^B+mv-^&~vQ%>ycOJY~?f҇O_uQY#: 529t'ei*11MOOzŷ{<בea[99}sI^Q^ &,,LSDH"8TmD _H)v%ޢ䗣/8 վw9c:ݴEPdo>{ϼX7i Ͽ}<5j[ 9\q5^ U]Qո\*0%OAhô:iCX!5RYPlq*5C}Cu۱V $$*k:ŐQˉޤU h¸س'aZ$INF*Ҩ٠t&"+Pcsp 1FSJTV}!Xr-JNIE,qL/Pل8[NE797{i'yQ~~i`D]сbb` =a1 IL''@΍0̧.Na>?] 
;ϥ?Z]X[.yܒ3QQ4S $hJ^k9pGGl[6Y)XO] boh\r5N03x$Vg7EmՌ!oz0u53bW_D''í Wkθ 9_Qث՛WV\*Usg[Զ4ͳ{ T?6woZ\Nוo*\q __4sjn$jo&wI#]v úì3+IQ8yJ>=_9.=52}Cver5yd&3]tI+֫^D*&g1}5I._LysQ@$RW*&Q]*c 'A$bԲ/՗:r^I{yKAJrR2BԒL$My+RhQR8%3xC-@rHYsDy%,wAJ{ ӽަn Jyzche.9=!,2(IQ:qΜ4GTP2h)Y#CQdTFRX(cbBE B.cEѤ$w;Jp A42#g72Uaa1x(X(z,|T,ʚf{ͯa1wO~Ӡrh˟".g,!2qD:C4,' u4Q:fFMNC6%"a+㻕٨vĄaU HЦtt)rv#㚉y0l)x(j¨-{5ح-YD"S\(C %HaPޭJQy+(8C LhI3sarˆX݈U{2|Ÿ䡸 qmORY+v"ZPM{0 ɠBJq1pPw<Ma<4GQ6!Gek`m(zvv~G\)Q@*E*\]L%Uw *k>ZУ:GupǪG:]*7}i!KQ0YC, ’h(&шPKe !vsQ yTJp"4Gp uS2 @M$*Ab9.Tu(RXK }y6#?NvۏXN sec\gМY76}.~ߍ>핒hh*4Hy:=>UȔDk1z(yez`rXwWNO^E9wFq~sN뿿wӓK7#F8ߊdpCN:8y7y"ړ揟]x8Z~/9J:Xq.WW8xb2)Q0VF:_Y͠`$<^ G'/fY0;F:\x!uo<A䯳{Uw=ʅrJ6hкخgWt㛚=ozh7$o{^ASmcNqSCwbTg\^Xނwd9ӓ6zs/ȍRPso· 8G7qY)G$$ڼKZjHb22# HhiO6e;YeɪMU^[.7|bFkyS߰7ȧsG(„!H"Dy%De[RE5(Ƽƺ(IQzE;B/E>E3}|j=7k08g6jYEOdX&|4b2;N{(`DbJ8+Ċ㼦LofR*XV*luT,6f } ܷb8+F6iH6r[+ޝm}RJ4)A ? AyG1 C{kM[A$@11`%>KW{ڲd$X'z5.KR2%)غf{poz!zy#eՕPsY9)zN2/aXP*bDE&JZYvgi|8i@<&rB3YJ$Hg7pnWM'xLnZ ."4KMӈk~"dG( ޮ2*7AUxƄ’,(3d87bBQȅ`I)k%$5)Uq<'R}z?WAJJ B2MR,ojs:1)9(m̙?I٤W0. NC]0 9Gd[f9Qt S4lK=ši5*qSvA1]5QNKz@cCcFUV).{WxWL9db$)ƶƦ˱ݠ=}uvFZ:mxp rrcw;L b!FWlb`: Y |w\n`yn 2fRj."Em Χ1;i"ː;ok?{;ϴ#Mذչ /ꮏk.iOWA:G%kեXC ؠ͙},aΔ7ڱ;:]\ͧx;p7@fr r5LO4Y 6hۈs,-nly֙AtrL}hнzZ?=k7y-R*R[[kz^j>W>|h`w6}d>Z^OT/XŮdE*{1d io"lN'RŔܴt^K&d+ZN*kJ=x}).y[ :)9Yo ?\ ںR bJ!` < KcYy. |'iL!5Ƥ=Y:mQJtmpaad)൑;zmԽe|e|iJL-1(tAE(~Q5ɤ$(Hف֑ +Ig[7 !+^ ._tW2|<+U@iq!FCD,>JDiJge/B;F"h݂ B&2',1,0-Z;3 鬡?2[QT' 9iE:!LQKU+5ý0_ R7CnchF~]N|H@E@j0f`䣉+JXb =6@icbRGĮ + -և(5dWVI+Xî *nŮ n;*P!i+WRt AM m}'toY kɩ*ڴ>ƣhjW K^2t 5 9'G&A]lX'8!Fg3ߗyo2EygGCB彫"|ٛXU(-m09Iw I?'g ۹;1T[LYn1Q>#?K ţ[zBB kFgsVo&#gIAJW|8}f--0~eu/Zj2i34ͷEZ7B?}[jY œߎިj;Olh/R(JB[ٍȌxyYܙ[ͽJo`'뮨~rxZ<:}%<ÅgtrM$JI2uQ55('ӛiΘY(kd}5&^HӄDžu6DT13tZdteNkqZJC=U)E_*}]2BvNJ&/|͈yctᤸIxTt'}oO9ZZYu\CFV.+e~$mȉHnQ +;D^~ .r^,Y/v ֢LkPJp 1TDz}}tZ {k^˹'9 /}9V٤a`IC $ f|rP&_){}3}SR(Fa94d-:PYSb' T kbmp_Jm&pCˉFEPg:k3&XڢN2I(s0X)3.hLHsm+.dօL1&南V|\23 ih.j͸BSk,=IcJ0 sA /!mI$!LBH&ZB@srY?QHa$API_q j#Yk7~&asO`U_ aj] ˆw|\J O@Oh@qIos|Xbìgaq!5/WW^d"+*~q'#b%qV.6zv6htr];`WϺXY׭&f2"a2m\=˫I[kՄU t׿=xׯϷ/㯯^z{=xחo_ F kAp[oM/M_S|L}no2sk|l6(.ekC@R~z1NߏfmGEFJ_ϤaǩJDGˣ=ꇋPxڣz!6Se4Xg\i#&NwJ<ч(XfrqH}Q43Wp*3h d;ȃv8,XARnk5S&.fLysEoYCV 8:lU5p+ưMl%j&}lRҊ|]zC<H䡶LH9֜, JCýP% e2 Т1&>$:jF2Z"oېxt]/NYԢВ9uZl}&g+:BiCHFÔkJ!RePV^+'sv ^޾jԚCsܓf׷A?Ş~~;7d( 1uA'=+][oǒ+>vꛀ9'M!1T\+BRN }.HtSc@i63U_U]lպWU|8@{4wdzݢ 4Bna*0yITbC;' > 愉BJf/dl-5Z$#9'bP}5>~(V5B*CR>Puz_P $!(s.60-Hk,l9qА Oz~_[d^>`#WmGflv&8ZjOd9ü9zhѤ";DR$1ɼ&YHu/尓o8 0X`2)NƬA$KBmDN@266# l;++[l'lzoBތo= qaOעrX>$#lI1Sah)7-D/ȒVFjέZPa ꡦ56kRDSziE:p O# s ׇ&\-B,BdjrLXag_d<*]Lkס0wjGm s} ̶\}X]][^O=\snm7}]Υ-f eDhu7n?T ̣;:[==΋^yλO#;٭uiwfP=f5ri); E399l\b8nvnw@>Ǧo6+ 1,1( -Jub>IFgȺtUlɹj%&ra"C9(sJ`VA0HFfyqfXL2Bl8bQi2-ɫ{:iy~Wp~>|X~- H !J$2P>@NFڜba1I"H6 ѩ|/koMel 4"zSHֵCl;.-s0 tں1jGY,4 .HJPFl&8[boВB((8WE$f0EP [IeH.&TC j!4fy{~S1P|1"GD%MFdIVv9-\BCٖ3ǘTL0C!Jhe8sLɐVK-i$y5r(;QG&riHfRr(.ƸhG\qqC)ZJ/yl 6cuR)2:[(zJq8~4Ojՠ!v^W3(H~XNO'WI_~,[~@?Nrg=YvQCvr@kc^2[DgX3hƿ/L*|s/7aF}qm*n !-맢tU??<]G9}of!5īaa襉~[מM)_^V=}a+ w4]2pudahl7}E t]$b0•M^ovgSR3Г?=Փ zg3E/q1qJw%->ʾ~,dsU_G ޘ CWc֒wثzq$K5dc2*N{N&UFk.8oq17ݬdIx6GX~Gy-_ yXf6+U@/{x*2d]Y4.P{ˮ };q#Ӎ\ΧooCzΉݤkmA{k#?z2SuoXdݞ-!6S|ur#E~CB;9ןxn`s+3XtB)a,& lS)d6QB)FN7b`HGjt( a&I`B6^\֢VRU)j? 
[dHb Ua6W%0W݂WC)IL6{1ǶpXGg(:ƲOT6yجhVU>5:@msspB4]LBw)QPB9H5dЃHj,uja2% j= WXI"A8GjhHh4Ԝ@ $ͬv-kl6hJ4"Eॴ f}#c@Ik6[qCdy9r2--ixOٯoojui\[^گvYIMz}|ffPY+ԘGcxJĶL2e) 8b V%d4wټ4xԳRbK-%`%PRya( xb,l6+t>l6C> +Nyi h j\ŀ0G;&cݐC5Cq / zk {r Pǥ[Pz"QN6 ca̓MXlyQH Z:t P߲l+T*=4 o8nHPHCR9jZ؋{ lA[Z'JX&HP|( UVF|`zh>2LY@DlkvFphNR4͖0O]3'9w,tY ӕ5+kTa)K鄢P&cJldb V"۸Q3oC&&u$R|BY?& ֺzQܺeYC 2jp4%Jw} ƽ Ѡ/%Vk$RT)`r2%Bm Gr6HlhbmBQF'mMq_ZIgKo#)!z kU/]-3ptj6ۊA K𵱀[TLV%lS]Jg HjB׎}Vp'r*<;p;O]hDICw@dH7nUB xZ7`m3$Tmyw?nI_!n5~ `6Xle/4aO<&~Uc44CJԈ= H_?WUJW.蓍WGQFYqMx W>06sm,ٴO P%D@J OITV`8LbAEE>t{tSVP3QUŽ#Ɛ6YnQ,Qt&P%:~A:k__Fӎ+ʄIYQa |Xy*.D R$TDo)KO/K} b1-t.@H>&y-J'xME,)Puӕs3^j1X ҷ8'⠻TPgܑsB)n:w-ebRJxoNxIuƔbdYEJ02j> 82bMBkAkҺ K \7jm|ZTcD`WxX4{ IBf\K3XR'Rp"֧iO=Kl$ԁį WEL J?Xzޢ38 Rr.l{cyC X϶Y͓9tJXd;$|:KU}5P=8HHqJnѼ˯ؙ%H H~< 8uūb^g/"^'q?$ a<`X;&,$WrV)}XZhҐ4q EL)E=a8K_zv>VϞ|h%’VEW[pm:~̘u[/ ^’b57, -u^}T W~y{1Z<`1PU}3g{(^~u"pgnYY2,^M'\dz?z չ1VhipXyuFdu(/蘵dQU#J[DN=v؍toj͢RutIƝ s2$L%E#FJ¸qI&X07k!SSrT*HM \"cS38 7&%,6{οz̍ qUjwR}g?rY5\$(Rd5"6FX^*a+Ry<P5K5?Eпe'TRWps%v$WQ`l#z|zQ9K#+!Җo{V&2RЀe%+[&&ذ >dS;v7S˪/ub |::~!Ʌ+t&5NҊG%&z.}pCTAgi "`@ȁ*ɭ5U]nyb#Q7$-XܺFA\X`/xP]SO&X*oC']_ llތޏƟF[7eqdmW:78ض;Bw[^ŝ;9nn_>ESz< xszK+q6UMwfj~zǑzH^poGzvÿ/i9XyP^ra)9U K* D'KA=)yww"-]9 B2힀oNu,wؾ^lu$Jk~Mb u!truE Z;WebPZ-EaNF˄zk6(\/uUY4dUzPVͳAGk>]f,:⊊W/<Qyߚv 2*#JghqÉE2[ֿ GD/"l Zy!,VrIx K#).i&0)~3]'aȑѵWs_7nÇ5a6>fnvO85~:H7 :ɐMUfT3/P+ .qtJ'*ZJȅIaSF^2՚1& ɓ+u7xzdIsM?\-J{4 0,z,lOj23 2 +wldU zHzUqfy;!7pyO8=PxE|J'eU 'W >|,>vh4|7<y]磳yzq̹1o gv6 &ww=پ'U,-y'; p;hwQ 3x'O;P|؂:$^4I]nyƚ1ykq>an}F)H4fxd,N0@ [ ]\A+DT Qa4JYu_I䰚ۍKZ|*ϬJ:ìs+(sN -9< p<%9VD9òKoe3?>,V |AFp$f.Fv]r@Wkj[rxd. %OOǒ K%GvE+qClv(Wvv{AW]vzjrFt̆դt(W tu:t6#l R ]!ZNNWozJ3+lQW ]!Z{u(v}G+ Ir]!`ɳ+I.th5a}+D@W'HW(bs20ً-#t(j?Ek ]!G-]Z+zS+t:#BVgCWqEWЖt(hBW}:A#KJK j%9L*.`'z CΘU}q_5/w_@I9qd!JNʪʔ`z۝-՚e)f F7\~ Q!w 8#u|L+i9^Zyv(UJ+1ծCO|4!G/ ̇$}+D@W'HWXLӌ +&+m.th;]!J:AK]!`m+ͅ-}+D)cJpLNt5$d:E!^`jX6tpչU]!J:AJj_:x<W;ٸ1XB] ]ia(6BBW2 J6 "]II]`T6tpυ꽺B+NKXIL91ԜI.(VKZ*e6?Z5l5gD4uNdY5+X_͔d%yi5ȫa{^9Y)i[SAOhM%#CӱdágDi6UVcU+`ڡGJY&ق@W=U1tGW ]!\ ]!cC)@W'HWL #hFtmFtpʅ-}+D)@W'HW\HFt͆;t QR5 ҕЄK]`AL6tpɅ-}+D@W'HWRsrRW1kuEWWl *NWNƼDfCWڡt(Jkc8ˈD.=]ݖNFDW0&\ *{cP.]]YZNOA&`2sF)DQ_v+)TIɆ'MpҤOHkԌh>[Wl@({WOj {p1yl ʴ+ؑZItJM{Q-J tSˍ ]\yCkOW25   ]!\r+Dl Qj3 KWec "cFiҲN$aRȌ +J+d.th;]!J:ARD]+DZ|WHWhIyFtˆD P*J:A2Z1͆BF tutU-D=cFYntݦj*]볨Qd׿B C>n} X$"<$;/%Gg ﹤Lkm$={x-&yֹV6*"crP~w9(zDrnirPz&<ESKYI 9L9pZJ!hI|}!(rB.C UUu`"b/stw]-tu:tWؘ Rц_bIﮭO_i*H6Mi7)Y#&8+ƾ=2n%m(b>,F60odВ:5dFidC)ʆ^] p 8C(lU|X/rZ=]t{mvD)ob0N>myS_zz6(BEk4>v(Co{ϏK:S`?tv/=WrQ߾ۿAk{|5e`6yO?l?Rw巹oO|y7GkSb}엿8䋿40MG|iNQzr'ծ53BHm߿wRDža^H>c1{?[&qiQ7>o?5L`~A4CACr%cZqM_˫v3Pk'u6ΟԳ8bG>ږnh_yJz]>fΖRyB2x,F'h u0_hz}~/Rfd]JzQpSP%egM z*8]2ts):o ד?)ݪJB*7fq!Lm T>0v:NuvƢ^_짅4QC5_2S |r8ۻlPEVL3RDs &XN7-b0&^FD3Z Ct%'+>'ѢkMҡ\޼yw I5Kc8W[1և*P3u@()%5H seGCwa׈fhO[(:TӨ%gC D4|oxp΍6dڦbҥ逶{І/.[Dieѥ Fa?@>R;ƈa4Bf* QN!yD!8|z]F\RHo_EbjSEKG:SAd*X  $J!wysYUuP iH%5Væ tF :3:9&z7'ߓֈscO1Ǒ֑ ~F_o3dT&΄|`5 j}ITp,ֺ[7.m|i"%h,ΚOC2j(UÆV9[0f %xDMU.Y{V+= RAvT5JZGviE餍U(P!D`9R(- O9@EE; 0O#4O h;oPR6J2Tbw%,d./VA-#12q+YgeV\Zl莬Nc[ 3R#u˶ E@P勦NU. 
)ٰQ˸b5hS5B-E9V G6NuD;O-`׬jX;M*ĬeBGkl}đӜ`AV5*(*-"w@iU TW7"j,8H&`b^G;=5(!Ȯd܁E7BnTzC{- ȸCLAA&󭋠@zNB€(!2] 4HAjTy>:LAZ CAΛ2MC{X +TtgqJAQ"Ł.0͑&A(gIwB"=dH_(4: qJi0m+_CPqj{VQRA}kJ< 2+=[ Btʚ9fdYk|B H\4l=x_Q>VqC hA}p0׼~EwvߋqJb6(!9Yh>3(RAUvNZ'$̿2>`x;OŻM^:=睃xܫa}Qu`mFL`-$ >:gAuPiP\J7#JhIWo#f BX9Y4<#y0 _4+Xq=F9KP0'\qjh}0GJ+e`j?&n?\{qfp7a2hh=jk'j@J넔PU>(NĦ';s Y֨+zX^%6fWW7r')vVW9[nҮwtWW][ܜo޿؏?Foqϫ >fKm } _g3o+zV7Is>oV8>ݺ5Mhۭ[}߬~h㶭v[ _=yvZ׆Ùg_0)> I?YCKtyP޹Q8N(N';j'\NDBKF 7MK3nrK+.( ] ]/pZSbњtt(] ]9,iZ ]1ܘBW@4;]1;*tu:tDWlY ]1\BWbAS lK]]1\2K+FñBWHWQ ӂ 2\bfQ:/tuttG߷4Ox3m(-=[ζ햸] onrgΞ)¿Տٻ6,WcTc8L;`džQ1E29 cM"NCk~%Õ"t :%Ӎ վt|cs.H*1tjUBY#[z*"\S_r0n ]%XםJڪs+*AtkJpn ]ZT*3+}樫7ZNW %o9 &9RJh9;]%tut%$fH7`JpecB 3P.lm|JJڬP 8th9=]%JtutD6TOqdUtp j ]ZAyJIxj?KҌmo{ `cH+}{Ɂ4^N.͸'Ђi.{O݅yu04w5!T鶑wڡQ͸ G+T<9(E1e1ԵbKҿO$LѩO'; 8Q0ft[:uG501tpԜZ&NW lh4i] `s*)t \wJ(n jd~U%'=`֝Jުs+5aUX7樫$n[zH$g,+Kn ]ޔs+Mr| UU1tв; hcWHW #Y kJp33?vP29ҕZ#Q.!];h+-wխ/.ln 2]+DӦ膄V~EQBEP7h,dy3odPdh>V>)zjΊf <\|AcQ*eFݻwВ3{/d釓^l0(fƘeH2Hzg:5Wб6Xu?u2usx;zҺE.z Aw':;_\oe"y(_'#R?6*[$5~W>ruIJWV= ZP6ߝ3Ϳ:A?X|;~+@ Wᗔw®*|x IbRڑ֮Hg;1?48ӋM_=]0-{Gۙq?@db-V#d~+?N?@DF1r/#ሡJ$RJgL*XGxD%V+NcF1Zle2XCG2]sT!Sq =xd4v_?kl*Ft;X.bʪx\_Y=+]uOG7ڷou)Uw{W@vCphfPb, gx r4 ]캻^;,)eί>l61N"dMSB0S&x]:_9ҽGh"J _-a Jt869}D#2b#>޳Ho}ȳ̩c:\_mL"2JP JU"gJP\8-u!Ʌ5w!y\HQ#R…Jj/$Ti")ƧF #C:hK8TXi"JMXTGCJ9AaPwsΚ%m=3{ 0́bZ肗$9qsv5^ Ry6 nяܧ4.>%e2+SMZ[wM UF8τ35`ٺϜv+[Gݲ0snvllۢr_U z^?N[&p\(wp)BSϘ8P=t3PiZ+I%/{\f4>/Tߍaz 2N&m Fi/aؙ{icOv:߆Jι{3P鋶}i82,7)ȝ-n[aQax6=&m|q) !nZfzm eZl&l&^O>>>#~3j$v`O6DL:]%Z.P l>`>qc_JrKwD@4:&(fEwZI%`#Ay roCcnPid"pO\MFAY7:>毃"DM]H04E[=F+Tz$A!Beiέ?дOG`i7lOvB ̃Fy0E(6U"!8hIۑHb}waК 9aW ̍&j \<1M?DR@gE F@16ȮϻWn*?ܱ!"|Q)1:/WaI:kܧR ӱqz)0+{v;԰|\sxo#m(b;7B)3Msb8)( D;FݏRP4\uw:ꆁϖ*a`YiC(X*lh&믡B⦃ɊpUud|\| םɉdQKPϗ-˙[(JI e(Au{Y.ӿ]4%D.nɋ6 a!$L>Tn9Llivx=c//pU0|ڕd;sb|{NW!ѻ)> G9GA ZL#h8^Vtg?pŒޭtwBuWYu >2`>?&)WHΨ"QNHTM^$;?sۿ?J?߼o՛/7?Q/PN~{:倬eUYC~ˬb{d]G[UO"vhMCw5@2bO>~9|#MGdAw+6H"GMTkrT@C'vMa옱clutn? ?Ib$uA)SM8;Pf:Ek::lU'=uaIyY{SaIOp8oG#XurCTD01+_hFӹ91De[gyJua6?|wq94PMlޑ8: Gٝڟ9Fj-]s;\X:deu [.3#ˀ4ETPq!=ıWEipic5FSp}<*Ɯj6(Nq"ÕjD @ B*C;>N{-#cq(HHKD28{>ʾu:I7LkmVEO1ŗ7 Ip$k#)$b#Sn]:S,V: &[כrӊ韉 lm% f7tw=}k9qn?Bm^Fo3Zu&1'bLPZ_Z7뻧9o͆QX4eC-nn{ƛ;nnC@޻{fOy;W7tes*YIzrh.ot[RfCٯ}( v:sqsM{џb'Q"jhll#\ҍɶ瞲"v@;APv4Ȉ|hK^*ƙVXɅG!A Y—Vyp6e_ه3W1!Ȣ~SL0!J:f!VP0ɑ$j&*jf(\dVg'L^DRCI[B{` Vg%S3#4naM8qMVT^=a=ٗ; Ue9+R8 eKJ.j>H B3G*VEFmMF$M)ڠLAO@ ޤFzL U*հ 2B{9g2>_-xv:_ z:t:z귏\3!cLe=D.gъP 1dX:^v"ȽpU[ZC!{Dx l#ؔD Adv̺,%p:ǔY42bWg3bQ̃`k]Q[UFmգvo!r`u))Ub "l V%}wаך+|p_e>[eiߑU).Қ54MHφjkv-QxУ>}U\sRm\?9=A1e."$$G̑,F-:ڤJ檬Ai#ŧ%d; Bg.DV&nHg*jl9 >6tHa;zeq: +}`Nqzl~èy YKȲ6AkyLH'$ok] (?U쵾d黃QVYo+tv[PMB}QTٍf ζTċwvA'{/Ƶ0E̓&<@O 8i N$ffSVq ƿw MnC^KxN7{`:MtL~Lg_IJ|G痧h?~_8{-y nX6|?'2ۏa??-B]9c2Gi%JqK8m޼:9֔(U>4N]I?F糢+cgYqD+Nwo TSckǭZTʊA7{7l2=X0=t mJv=`~$tEjf˸F]wQ~%YU-fӒ[|V~鰋r}إj_/R`hGSmv<Lgz2{a4 \?M^]QdEd~zF㭿x7?weab^6]ďf[;M"@{`FOdvLNH71Ӡ4ܧ\w Y%U:ޚ.rpEB.t#޶WC&*u7nzs}߈? s{+%QIJ<m8U1Mx修'_I!x*gK9ﺰtlǣ[;c̓yg:b{=xC"园+ .t6iw|p"u̖V Yeťڰ!ģ7>&$`@YZ2޿SM??x$*-U, s" {E`f=j'l/ e 3W}sP5W]xX7d"Jec]Rasu*4{6 e@&˴m y)|9>%ߗ}Ͼ]z^;;Y$eUp)lePXuLII%iæqG8\V5W=/V {U㏏F|]=;D0*̘%@V\s!8|i ^+oDci3כ<w+̽VlKkط=y"~#bta/LBr"9Lq2AEW74W&o^#3=G8}~]~(1\0q !Jsΐ5ELЏ0ڐ̵y\5q6(tHnǺ+oiwaY9<g[N-]15 WT7}p2&(IeR%.C `]୷L1 Lak2[%`w#"Sd\Zs| qyzs՗(3˩ˮ䳖]h 1gN+)kLVIC9( nX iEduI˘&2EH FE![]YA Ҿf$p%ьMd d@"> qAd 9'6MѕH "eBD}>l/\i{;f29&VVch s#&̤e/ q@M vj+IM9*{2 d^5d;:z w3{׸gz9 v@lX5b\?'bG㼵 Y1 x\6'ކȂ!laZ@ ^i2x@%i:F3RL9DûO+$ӟGF׷ĸ'?$>jfQbH>RL llTgz;^ͼh+J.ccƬk*. $pc~pco=ud,צut+UԯԏSc! 
h LL MJ6kJj&'/c%Z{V¡-,mG Q> חx9Y=J' 1cfV˜h@ne`N0!~X 1SY*2Ϊ{%E_uPs -=w}"S7\JAEbXM\51H~\>^Tv:Zd32ydX.8^db,+sԍd&16D_;8ς褫:2ikAJo:5OʤE $(@JC J޹6rc `RnN|YؖǏN:{$v;Vj$pl:,^"Y|@L EpN.K@F㧒{ੜ!I{-)J.KB^ TLi9 eAnT֝=+hವ^(#tZrΘ>+q9ΖlQ~1yx%c{!Ya3W s2r5RiLR*ʨr44'SF""ψhD1RtN*::ɽA"HgKLZbȬəTYU+^NȜ.ޛH-0* Z9<1tO5>tN(7(pY Q@ 8v"YL6MUkܦUS橴F@w>8tVbETҡ.IBA)a*m!`Z B䩅O>ݦ+l6)|jIr02˳mPQS]XWƺ8x_KYzN LEKTe&k!-6e1] SUbRgm'j[;ASwtF63(mfgLZ~]><ƽB!]~tEs_(IW#lkTC"`qU3bZ)t5B]jKBv [*kjV ?j2zVftŸ+n ]WD&]PWyutENfteqh5uEtt5B]94tE^ftŸzQZ!줫1u\r+U8OH9$o!]h,/y~p"ZqwwcgXvնG7v.u.&N#TGB(ُ7D9wJpI;H# ++݇l=xS=UCqh!GRb 9TS7ӐS[f:Gx]WÀ=YWpվ! =HF*JMzjKc 銁lFW+A+EQҪIW#XB5+6Ќ6SwabJIW#ԕznb`kvvsJ!p/OU1{)؜7M4yDm2h=$Ջ؛. :KJIBixX%Iف:;tJ;B?8dd{t Fi60j LS˩6-^ oEK}7 Z{DdeK]IWOz@5+nHWu+"Z%UbJ0F+q!]g^C]1i=Ԯ+b]QWZ!]1pl1ȴԮ+XgxtkĆtE(i 2.4+R(t5B] ^+n1ȸ+#]1t5F]Yi4ʆtE(i 2RiM}WLĤIØv+6]1kh;rFYBN^FWTPX%f<4!ZRO}hx޵iJ@;JզM3ώ)k[fhC7en[-g-ݦ٢fPn.pTfo奫oހ2gN8&wx˰9fvvxfkg'-i_b<4͸h[4L>/Gfˬ7jxAJy0ZF0]IWOziP!]1pPi֮+0jv 銁lFW`hvŴz]1emIMz])ctEL3b\i0J35Ǩ+mĖۋJWgi-֮+SjB+`Cb`q}3+Ԯ+ԕZT0]1liz]1rF+k5!]]1^WLu6+־]1.6+0a~0J]~g @JYV+#jlV2afũ蒩|Aa´8qvv$NN 5АW+u6ͬ|c2@Z V{Y2=d%T8j $I9iqcSc௜,w u}9?=]߅=K[|u.Ben v֝z]0ћ]h~TI\v}H `OY`>>~ͫ OB&?ۙG9$pV=t;#b`Rϓ5PRTS֗ӫez{(H~;Yp8 5{0J[]IWOzI4P 銁M;b\یuŔYNy_VfÀ`@\׊s, 哮ƣ+AW0o1oEWLv]-t5F]i\Cb/-370ZjSֶVäzkoHWlkFW i]WLSuewO~ ЊVW_bJc&]PW.oθ]1^WLܤʋVĮlx'HsK~!֟`T70\L#i]ƼI#4h0^R G) do症5k ]"X-_:>:>}?n_(xɇ޿ʑثm&v}׋7vq?ϻ)Kw1yn9)sl>˱?9M׫f>Zv4^B^|ݖsBX[A`'>7`TaZ\z%jn@]MufjHWftŸʴ+ESIW#HR4+Fی4Sb}-8M2ҷ+؎U3"ZS})7`](y<ȣ Ha3[엓E{J]vuǫߎ|FA%%:(şc[+Qy*śNO1\4%"^ܬ? [/R.iwu=`YhL>[$m}5~F#d8Wg-x[~OȵSsHSu}{ϭ URIZ{L,Pͩ痥y0>qW]b#^Be7 ˊ(Q Q@93pHOͧKe)['dw3J#ϩ&hwO. \.w{EG¿_ 9vŠ+YR@(gS#$,i J[Y>{ZQ|*>]>%|xpVayʿ`uvĭ}@:j~G*-у2wh`#t~z.+|o%]BR"$A6@@>AL ŊdB|({bK4JАVebJ"ORXM`PD/m/Q)Aiw]ڜCTu>D)3-R$CWcolAZNj"CܜSX*1Tx:ALUO5)V{R,D$p)oK!$T) ILûx6PJF:|}d G"ZR0kaU_$d)A'i)톴*K;EC@ #oGq2JJqD_lP FAᜐzCQH"@x?{Ƒe !۪C@0AYLIN,H9 &E1)[V'fG7ln{nw*QD2]hW_6YRXYnj(`ɘu!gk'Ss.~‡zs*Ô|XkQJNiB0Ǒ֑~B_+(-N V\jH) AJTm6XSN " ^TTlPtA[wZ+ ,xC9+`BI @dr*j]v_@2hFg-K75!1ѕ|,&(Y'eR\ZЬ2vި^џLs [(k)Az$ic!sA?PVPB ׎]SPP&V*¤T) `; n!\"U0VjVa E[~ +(\=|8%X( ꛐIv4U2T j,K8JH&J V+dWVb@,AnTzCZq2F̷6Q)jK jȄكHc7uӝ1(QdFtQ< XYAO& "q b004KPI!0ά :P%@ `-. `I. pqT AP{ TtgB@Q"Ł.0͑EUPfҞ5ƒ#%d~GBY'_& J۱r8H]VkD5=() ElDG 1r 9 VDlZ"5rP"ZzVSMȲc ^ B9TE5tXȠ 0(WtrDEŌUDcMDr|T1&PT1yQHa:IB1M`߻YE' ]gⅡ\\@!:'"~wkW f=ft$MG1*t76BOБJqDI4T*#d`&SQda֣2E ) >@E&rZ5ȼ`|ڄLkt0X}~OC }Y%H Vx$ m^ ,:?jPJ4%wV4Zl?vXIwH'Ytg#MGhTf%ݢt\5]ZzVE*~,e4 %0lӨ{ŷ%e SLЃ![[&Gy6K. _a|L:_^:L<A[.n[`3 .-zLn DiVۆnMEѳT$I"e56v(g =><62#bväDy آsE6G=Tʍڪ+-h'r%5 2TA2 5@J'2(E7m֣bb ͍+ IISL1. ;X@fIo_* Q 2=Pʢ#6:XLzށ U @ +IUE :ƀ6mӺ`fa+5ziQcM b %_T4/Q4T Z،P-EC9yygރ v[j-z5zC-*mP z0XA{I'X©rltEiaӨ/zBL iq#f0kB;Rv=B'GbN v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'  9F9'Тd:v'd't)Fv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';H=99 n>N O Q1:a'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v:,|B8Vwf'Уt `'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v8~<^8RmqvA@KkuYw(L Iɸj>%n6%K@i%q靡7^0'%gCWתuPztz'>|f6tEpl hNWR-1ҕs*=ٜO1Hp ]PJt99>c*1";]Jc\TvFtE ]8uEedzq~>(ۥ F'.uLn6IePl>;З sd4EPk8bC)0]gQ&KOCը* t hԷeۛ&MG8d(4*496 ,9cTu@iJE3Nď궨[17& ~X -1ݖ_yp1g/ 7ajPf!gB/K}dq\o>Z1AATz&SasȻ{™8[ |n-nf.~k(V|ksk!~ǧx>3ϼ^htu?{Udԡm=]`gCW7ЕA(ytE(5c+M{fCWWGOWIGHWFhDW|8"6;]Jt /X 1"2̅NWiGHWdtjFtE|Z1uEhj'{ЕR{1#NW׋G_ z:1]=B 8fDW썟 ]\7uEh2z]E!cZ;2g?N8$.shy=-7sZoAg]< >'#? MKqC/SD=m 7|B+<_=ou*](,qr:]W˺:}w7 Wŋ?ܢt`rrC- t#lU5v)|vŽu^5Mn^X7_B6{T'_>kWL)G|Xc1(UGL׎ty %"H7/i~u]K۷o]'lxgaΡ=U}m{p߽θiZ<-/&|_=ޞb{m1|{@E{t缍oud{s_ח`8W>2^Лpǀڹ~67W(#_^_\NW/F}m[⦫Iҟ?% x?o^^\lC_?/oGw6v-EsoP~ؿ_^lڻl_-אMY=`H#8]`u'R)7y`pҫsȨ5Ov?lqO5qIg{۸WYspV! 
H$8m4-p[V#KI"wW$keY^rK.9 3!c05%a+&9NxQ >ެI8;8Z|8z5`񢫲֊myj;?jӁ_0uLLj9+9ZIoonc |x,ޝoֿ&L Y@>A6z B|'V߆I.tl0̭6 ˟?K5sMzf6k;KI\$XPCf*W1sӖU.MX][ݗmV2}$˔,4F;û]biWVi֓x )1T)b+t@HmojRsQ b-kհW\pveb"GBZBH$i^*tgеmD-:@WKaGhݧH ZXu#ZG O G,-z$H%%]:JpA`rW6Gkϫ;SMD; 8(i$1F9qm=EG rh{=gE~l)%Jpǽ|Q&C+DEQ\P'"D(s" #[|"M\C\u4/n]]C]+Y>Xްrƽׇ] 6j7o1j M:՚СbaG$؈#FNYe>"1ȪL,qƄ‘u"*iBݨ;yCh4BkM@G܁т#*$;A Ut4,soY_iIٸ*{u#0EĚemxl4@BhmbW#Y-rؔeYbPgt|yT|gr5L(A)XbrY$Ӌ.ge+ G?tC=;%d03S[!t;$ ūWҌ75$ e]A sfY (́.'˽͠>! ~Q޾hꛑxƜ eȵ((R)CYB{CUXM8'88&E}9c( (WKM Oc*5 <5kY(1rvܨXhO(2rY7mzgݬFojz~cgYi`Y4 K< bu!‰}UWhn+\=a)2:nZCBW_b>`8%$OTy^79C'j{b|sOuݐnt}7чQ0iJ>/=wћn\yE^ |u֍L^:I01~8vW̒'wp6,]kzl- _z8s9?o_~~}qw/O_|uzۗ/`4RK3 ?߉q9EײyT.hjՄ/mr5^EU JOg_ (%Rij,B1+@62[W5QP:UwT _b? P]څlcnnL񪷓$ 4H br pL'/6p"\S'`;Yjv8U^^Hϟt+ mz美q>W'3DE c¿T FC9҅ހ*Qf֙v_}IFwdl_ӰG}-1DVw9tCڠSXՆRE鐂~s. ,&vW}H^ӫryXAR5V1 )b޺P&] 6ؐAA<:1=~4B+Z*S qŃq!t֌g1O;\`lp&&❆E[0 - FJ鸢GAIV4xoUOi5צy@x*+'vҙu%J[ zbκZӈIpݸ~C[tMq=xR0CSo*"P M&~)d,;71L!vQ$C8(Ϣ p|  !kedkBK&T,3 (\=/zVZfWr,dY⼍us](45S+_͖ː.CZ_Rq>1#(1&3pϯ]DX^<.EzG@5% _[seY' zkyZ4?fb&"ʰ]BQe.վu6jޏ*3I87'LPUfiZ4un[ֆI&jR4 QPuX4qki։FHD47iB돶/<+nΥ}}U^⣧??+oHHU?{ O^Yp|y'N!7ic6w#7D/ػQɖ S2(Pz`]YdHQW|[?w ]xZShB($U՜ -'y1-{lHquZx*yӟj^bxg:&BW)" gu2r8/H{< ϳ/zbq 948m9\!r -%& }{Ćc$>~ MGyd͟cELBQ8TXl\&J^=Q\'4(Ml8})iڷt/툅rr;*_K? X5ȉR4gKrY[.V {(=U̪<3-kb@4"IkQ {'%G:6 (Pʍ"VFudXDcZ)" 6 )$40jѺ:$牻C:*#,Acg `6/-y$# qgZ8Bz~d _.ˢ8߯zHJlJ ղ(k ȖgÚzGゴY ' }a1"y LCX\`}8CZLʷ^B@%koT@QW -"2 (qB]Ph# *Xd9/6&:inO{^E>|$zhWDNmH$ T0E;x^"bW|x/aQj0<O꒱s)(vLƒ¹MwZzz!BƯ:YuЎ92[ڃO^PTrLB"+]2XS &.i@w#g6+s-Nk^- > `s>=Nǧy~?.F\E#晟%o}9f@^,GEXx'H>,iv"5XO*$0P8n*8c?9݇wy/*>hς|ɿ֯D~7?]؝lIxj\lS[8vsw{[UB!U _㆓Wm4.GdohN]*QM@F@w70ίmWG_{wGC8wːUI\wajw-%?;&czo}v5YJlp[I[l g^wQBڼ{^lZ)iaÝmo{d6wvoݍ7|ϋ~m\} so}ɚ\Պvy"="w ]k+ $\< Īj9 :d"NA$ mpA^#GCU6u,ZFpt7M?b<؍?ՈFF4mokaF %ak5( )/g=)o3x`ōF)umz&=eaxNaBMg؍:Go1^\n2)ٍKՋ^t^mAwKhY9@'֥EC-WNUNRŤg?ޘyc,}rIu2E4FT8VU`uDV 3FΎpဲ=&=>}$6dq?g{&)2W쬷԰.]7 T-Y R"M 4D]'˖&Fu_0?I]EY{@~>j*H4P̺ZJ)C()+r2F ~J>Czl`cڧ~}lϖ[{:T $gʐ+*:5h R*%D*K.[\lkkX<]@f JahQ(?w"O7rv(z@`Xؗ[{6Bm:b\󵏶FU}n(h/?f/oӨ2W2HXj@k3d,nJ: zabo+a$V40In7D<> bj p j9`Y8r%mr =ܢ@ɫV7յu3NmY :h06g]#ꙝ6Lg9FΎr6_:}/!K;+z raUbg<$3eݪT6-MVe"݃׊z=_zHL~UNGEZyPuZݠV>?Onȗ⁉(b9ERVl `6XYXT#IqtQK>8;nnjV,/^;8kL4w8ZO}NBח~Pڕ͔zɗf6}.աߎbƊljsv*XϜ_,N >w?0[e^,b{ը6wILq5hCb^b3'/oGMw=*hNtss* 0D5dghʠ떍t2ܔA-?qk]TZ rVC0Mq@#Qb;&[>uu3P"*śusEA-޷Zy dLы|Cl0 NT\P(t#gǒO!$)#)IbUBjAATX801Y JWdsLM`)_~Ce6*W/\ z%qk |CX[{W<+b t{idd̑8J rMFeT>H dqxT$Uq9jVx:!ǐg],{׾6ն*n M{A.{prwXm;) X2AJ0D7x~H9-R~ vAÀ' -" t 47_^l]2)Ē9*BLĔSK).Sh_hRՊL%\+`KVjVmz+-(4/Fψm29(.8b0 eUz+s[gPC]Xr梲%]S˺Uᓒ b}*sP:&aya9N1GIX3CD fUa 1Ud؄5SD1UɊL~烸Ca (sg/9$@w3n LG 9-{`n(= .y3.: (>zbgVFS}:Jmd;+HNt xﱠYlHTHqZ'BrUZ"wߘ2AEj-_Lɘ@Int#nPsS!ϡUprhppzJs(oE]Fw =: *j{d?+V]5*aѕ@$ FOF]䆧]]TI]=Cu'in`\245iF^#E!ٗǰzn)!֌}^Fקq^^n 7r|>^za֛Z^!b|qvRspNE}Z(Joche5(׼-+XYbzGj*VXۄ0$Jaxq5#0ܻ}ɰkN呲^|PM|ڃOwԕiE* Ƙ4` vԡxvu `Z.w9!rC5Ee0ﵜд+p"HĀ@)!{#qi{@k.s< _foݯagjݵ qfbeWN4΀^y%0/ f^եEDxbkmF_vmQ|rl@8dphF<'_,m[A2c[dU7%,Y\+kZ!P&PZB darC J ]h --_=?`J=Xy ՃU,\=ZՃ\jWϕ施""WW\_ \}bD)&_\NWWB3nk+w"+K+)?.mX=ԥ[vR}x;|2NX@2N*^W֢|ʤ ϣJ1Ohwh4~ݺ=fsi1mZH}踾KCz3)!ffDR&,0ơ23f193Lg},?B>kKrYZZOφWemY$.$~N.k岤l"DVKE6^8%`Є@ឋLkPJp 1ЍI$nsKhO.k>LhkQIAB"X_U6Epg$?4y#Lմ}[@eSF0FXal)X Te-w38S)qF;ՃuPg#6crE-|tlNRGgf iUhtNnxMFM֌+4EtV,9 v.rTDG4[V^+#,DT'e"FuPԬ" \:3)]QX_,# _)=Ef!aqbXJ$M3N~(7P}ĿCu竻ŕ&פ27yEtyqom4 4@]yp-9%ߌ-%5A=%2#s[djsA47]Ǔy:IE8l_qyvxHO׋߮KjN7T6*j1RH`A}\i@c՛o4W+*5 ލ6n/kiv7ӳoWni̹?̃JwlH ^ 9k`l]MjuMq뫹Y@L85EO<2Ń7bXY`k]4r]Zj .HXQh$_Ԅ_WHf+plV '5+?]_~wooN~?|ݻ.ɛ~soh =)F֚mO2@?ܩ7oPijoQEճZ=+lzzsk3<a_τvʔN5+Uzzi22;΋hm SziUF&xğVL|ч(hSAݔWlS^]~f dd:FC\+& 8w:8uk}bߤ8=fuپj𻛴 (hj}2i/pɧB <H䡲L TA _iNNNYS~ae@6f.Y,K w1}D٨J*f 5(bHU 
-׽Zؾuy K xPxA2Bh,bXQ/ø?5Ά< d_.c"&Q!@ ђђM5~eSжCY {=Usw[ s;7qπ.6ibUvJ;j91=^{f'u |ǩ9G 8xW7;*^Y"N;Wzr'2!UN R ŻSy")i$1^ N ZB$ViY eWRVGmTK,Pp^ZR9K92WT2[(LM1iKr~ ut\ZVkEPU1I0Ӊq~GakVbX21:.* Ao U"eUb2Rb~nҒTu0ňD^WTfmCePb&h.1ݛ[ȍArOVLv;qފveߎЂ#TJ% O'A 75"q/+ES^1"Vx8ׁi_fe &D^[YH&J0NZYS0$ FVV8cHa(C EI!^B(˷ i% R#* )Ҝnkj\+J!5p< B K*"iE-nVAaR3T)V6TMФW(&?J@\:WYLuYDU"=ܖͮ-U>Mb ?BeEgn1!w?]\\F:GF?ύ|>MWFM04g]3H9D3djo-}*`xppJrǂH ?]]P E3&/adQcnS0|h$KIEC |<"d: DpHC IʩRXzo0^Q f(JlNBzneY4/D#HkBPV:⹲ƣ>w"EN18!*Wq7ڟŇmKO{)̤%a3@ xG"A 2H30f(=Y﮲/ʙm*ZU1z Bi׽J$-×NEfx_e:gʔ- lO2q!Dx*x>8ּ)=P0!z<oԺ7#OZD .j.9;ss1{~rjq/gQ.Ϣ\E0Ƀh- /ꭱ\Wh,_:cQ:Y%ȝ(_G HLv Y/!Occ8ESùÉ\M(WR@y`D@"Aj}L 1DX4zz@ W(\ tl(jw(BG 7= MAT$9Š9 1 d2p'H b}8< j1T+<xGԮnj|-־+]} v:u`!`H'$,NHla¢ zܕ'r@S%2xo i\(A;C ǪD4j};QF8>rr-fv>fxl»iA秊̷{S"brY%[wtBQ}!sbK+xCLaq{ ׷Xx%C)wکnUs>_M~}ՒkIw[uvtVmўoۇl r-{p6P*B;>ɶOo,[ݒT5j-Uo[{ݵIrŷet G55CsxEZ#]%fNUE,='eQYw'xf\Ͽpgw}73qke,ɛc"srXyӚ$nT@_"{6UXfBzʇ0rHBq͒ZTͰnNmMziѷvOx8M)@%i, 9̟ o$jes v0u$Ӫu$eOZ[QҤ"La\ՇwӍp?%.Q-vn}6}zm~I.w%XƩ"QYIڭZQ CupTb}FryRx+!`2 1jMQ$qJ$ 1)hE# ,2 rЙ6\j9p>/!77GY*UA\yG d|җT)B5߿#neS7`~[ ~z#0 @5;>,!5ooQ1Fq$U/۵zz9.Q'V6Ga`")D/qq9CSnX8C2wc} aBn ns6+"n)y煫> /s'%ixoR*C$_;S?嗴0LJH _˶dJ0۲#{j Bt?T3]?wA NyQ0uc Xdzp^=ǥ6x苍&O^"' y+0ݬ`lBĶЩyD!Run.JuXL3M!U׹;&Jt 3}$:bϦ}dlsTDDQ"S$ean"q <#LBR0;}.1\9"U<cky$ؓrܗ\֌X;Xf_qqA?ϳJ؊|9ZJ/g&t/`6?ὅm9z a +[65w1,Qpyn{Y+rnGf ҧb1 a^̯Jր1!z' Ȥ'%S)P_f)a5LalQMAzuӻE8 X!>JTV|i/kI=M*q}ToSZRդv˨ 9?ǃ òi8gb^`4u NpǗ!^(aBڧ)Elj" (A4Ae))CrB.!{3J%V.wn ܁0JH%1poͣ6|m- n$EQa#>2U,F"s` wTWQ[u5 e/՜T5Ǚ~clySK^0[sj`yVTK[t oXBŘ EyIM\B 뮗MVũ)^Ҝ/Smvq_1J Fm_1NVV\֏]#&BuvSW Ċ΃bBD,T4:%6*vdM[/h[/rv-OiAڸ^T c gEH4Ċ.%VvlPn5Ǎ@-Y]Z]D.t#¯ݧ k>mݱJQvԖe dcmܨutl^g 8YџSzWgvScXUv~ꍞn}~&ri*܅/<%wOOBJE# p78x ԼP-X.,GQ.Z\9ז#i.ۅ9s$ gHa%,osQqUcd+ppňP]5|"^b5|U`,A'߲;\7*LD^n4+' k]+{SBc uRs$g;b5Z!iTsHuѬ#ۇ0 E|.f5 2mq:ual #ݦoB{3nܔ)?Ou l8 bB0d%&JH`H N '4ASTkhv1/n<3 )v[k2(u1+0NQfc,lSWN2&;wC"uhɥR,]<oށz8wzR,IJ,`͕+5SiJ5U]1بGqx8!o).Q)).R T$?TFQْ~sqh'ͧbtY?]<㖖X4 $Bkehe1bi xGZ"Sƍ*.'H6!d</Hs9̱` ,bhaZ jB,d Y !0H\݃pvy}6&FLHMnǓ"OW맻 !,5+/8Su8T1h,_ RNOyb<N AjDH́'^*>K']3(:.V cI \\dXE:w{SRu1"KI3.Lt6&=z[RsI(Z?f Q-8 T Zs`֭Y.6(56$<1y| BƏO_6 Ҝs7vٹ%ok9wcʳvS*ETW]9hKJehr Ɲ; OV7pGJ%$)74^eKq nOߺavr8Ɠi.»w6Ώi$ nIn82.U? ,hpghq7Bz&If7!$IP!&2+1Lrϔ;~+=}LE-3]ΏaI(Yi|ڒ` ϧo)*3C6Lˤmc^>ht/>Z 8Sك_ћ ށ ށ7x#ed"bbmQM'c,a"QQΙKjfQ]Rv~rʰ|V&E` ĮZdQ8휀iPY0k PMKӈ%[>+U Gޓq,W}'R}aI8#FB_cDl+VI{CRMHLOuUu]UE46EZh^bg@瑑ΊD*mF,]6VB Gdf78އ0z!:P%U[,M%Frf<+,U͒IykgkeE$\ "BOA~ O%Z+%X`Q/HB- yPJOx}"7Qn .a:rXHrBjbJt;λl(|Emw s~=U0pcPt@PRoo.DYhB]`P "G GVV'^XXFZHTTI{ޗh}uTbMfQعrFSNKWnz@9LҾGdy hi}u;h̎_N!o ߿=<|{7A7ŋӉ wϮ?Ufp7__Ο=;勗Ϗ/~9 ǧN0oO2>?_qy/8鏗ὗ/^>~:~<OyyHҡ̷8!%NegV~~{Cwbs0+R{ ް?n qڧ9P {xޏtN'o:p 6{ֹf= #-Q_tM*xr=pߓZ!.9=vbp[_r-mw~pr|1bEO<˗?Lh㧱X%EfZ$~_ưd呿4:1/8^u,6\aLNCSnk76a`!dffF/8gi숿`Ŭ,>|1÷LAǾ9(p=ϣ7N c{K8#H xю >R(u`IbIBcB*13 ~`FTkw~fkAr0Zϭ6\_$-ٽ$A>'IPp4Y{q͛7㈯ߥ@D%@Ԓ@T%$jL} j} Duwe(,U=!}w\2dhŰڻh!".Eۻh{m]E[9Eۻh{mẖ!MbB&iM!3.W$אsE+C yQ&z]fEs?soޠ>r|c#XbLdưfɳg-. F[D TamHbILpN8 Nt.H4"A. 
H/$5yvkj*Q=BĚҼ@Rh>wּ孯%<֣Տ;iMLn,l]O|2bbE|FN'?`"+uko/;WL?9 &|#x83Ʋ"eҒ 8=ݸzox0TYc`fVC#9)JDQ1g⬖dEELh!\ZR.S ̐ J$EDluVvMޫ1ɦ;o3G8tɻ_ѱxE/VArxݘiqiVx 7%15d1n@1f]6f t̒lɊ^o1/cB u:h j|[^EgSG>0":LЀt1la>uiGGO^|B:鶤&tL0"%;WZR ɇŴX]ӹn_(^ F+ʖBwb ٥( I4~1f&$ fW1]בv6!9) []a"dgҭDJMpw΢x?|~Q\2=J8(:uQEݣƱftgҭDJMpw΢kX _*Pf*7yZBk,.sLR9eL ÈAc` ``3P%pDaHYRK"$<:ʑ'XpO,Ehr hS]hD,EkZy;a]јe!Ph%^QDm8 e9e9h cng1Qɸ)ʦR٢s-R"| Z:0R Ai"VR)p)n1(dy/F[2c&JaFqrA1qX,]`.P+K/ ȋz?,_MMkZH+EKRA/Z%!h5B+-܎B_+3"-/& pM%ɟ"p٘5f"M}eS{ث/fܟVny/Kk$m4uQ %品9fc42 0k6z6!'%Kj2Wͤj&p}Er4NV{[mA-I;0tk=w '<KFЯLQ\ 9A[^ f鸗Nv%ҌUG/t; ^~b!7Ld}0eo?%ܖ .IcۗȅZ1m4?VN\AIxK3L[nDžW톕?4d\epe\e0 :&ycxl JJuB,5,HF!gkZ\HaL›/^E8!VHw _ĿPL//pG<[R*;QaWLfYoI $ki Ke޵5K6U!Kvl֌7d3$/ub3#K2I3_Ûs$%Qth ת0L +V5JE Q4F!4[VT lʹ a{3mkďiaC> Zxjln5UCT#Av|<.&/FG=XTJ%Zwyhi.;z82&YHؽ8˷A:cv=-5zL0;?lsAgdu;2owyv?ڵRnb%&%GpqN?_pً)s`.f/ 8OcLκ*E|fgeDSlUM>ŧy9o46#MPEczG|WsE zB=~r7t $Nnr )o+7gCSy?W5&N7Z$Frl>P#-{ܼ-Nb_|.ψpiS>~ z,IqZKt!h(m׭0M=?m֩TڛwtsBnp,K-9jIv25 iʠ!sBUfde*kKW!xqTd3ķyuk\݀;n$e._ZHXD]n ~hԵ+y y?ݽ/C>8&0Ta|73P+ @%}OcF tlc,\Y-se$0gKc&"&+_⭪,Yz DX9L`eb:᥊"PISǓM_pivj/R侇PK6l.d)5?7짹dc[;/ŭ5ukg6*乘EC,r&냤A2QB_UX,EDr97Rd[GR@iA(MpN)60- eGN1nܮp̋uuƅ?Lb#nezq W!4k[qWytIk܎R,L7'%6ųx9V߲E!&Ѩ+q1i񮷱*B!n>w e;J!6ɦM"J8bڌFج|-ѧ xVcUl($8aPv :2wN äBM(ۨ[{r'g)i͞fT됶9l='4xtEt17"a,O 24$GZjR8=>&9IhN\˥&JsրMyoD&kd32jgmR&'i Po7pd:E&YLߴ4(W<,˥U7]d g%+s@&'|&OIt*pjTŐ#GR@y`XGHQFp{”̩ۆmsdژ^SBzrlC!4nX4P>+d׋Oaq}jt˝cvMzx(yy|%!$37 (Ϭ34qbiBVi9 :b$='AØK3!3WbTbÙťUUkz#`"e}I I1q=Pc3}SGwqq {!8u9>5">9.5BqLrsDCaY(iY, KCN !`%8VrI9Qȼ2x[v2M`TJզԮ*$20\.:AdE}~9wκ4gmB`[mm'1sЛwUJBha]OpF2bæJx.cQ!/ ӌvzL|]n|)<]b%Xl-Ãpϯؐ@ydz:+Z_nUH;t焖;z~2z"*ħ*5 MD SHݣ߶tjBAJC`u{ON*b[OeM' _Im.d›a6ym}fӫ$ U^xk3fyd[N"FKFaDԹ-:8>X `em,lv,vm_j*eFYX|J!ɏmi@f|h>lWM}BjcT%+m=~$z~z I3IaYfY9*孄B) t%Ka3 fxhv[`&|g"j"cFB2`شQ734m`̭ϰX6G#@'r/Y+| iҖqneЁKNg=mvjF:r#1vA=yĢB i"Z++X%"BыOȔH 2"p}`zF +Xr5UXDw `ј4*N 8dxކN WYAѵf̦}`3 )]S!&/Bf bZrI!+g #+?$$eO M>cO%JL .y7I{PHnpԄ)tr(Qʔ$bpµiD)UB$$0h¹dL`ލ7`KX}M#%w[1[ySUx'Y+\凗׏//_|eװ7(wկ~j/>O0r

^_]z>.[?FfN\8s饫߯5E;pn Nk|*Dzdz^ Ooj>o`4LRkq՛gJfioj g^FSy>c~wsҩFKߥ[ՕX\?7Z@MJs,-+灔|Sȧmu)i#&ۚؠTi 2C[AF ow<8e-E g;&:4Z.&m~Ûf|Ca^~*?]> '?O𯝩S1 }5n`TZO `8^9J}3tG=ҷҨr:(.Zx@b](ǾL6Ԁv\ޥ70h*;6Tn8a•иg0XL;_F2m1E<7A^- _ܣBJ͔7I/ PeAp s6'8F.`$[C9\Ab?3#oґ%Aus+[_E& [7]:l ]Vrx~N9'O0Rm!ږ>5e`kjl;UaxZI/щSDGkD2Ӳ݇@.ALgG hœ3ǫ烈{ėO*NJU/?Kx7i%i##˧Gv2z>M[3_;eR\'`Wd_WFux  KwݠAAP ` Al*y# T 91 "rIO Ygu&<^=xMeʇkgշהfq>')4P,R.`$&^|xÌ2U'zfMЧ_urhb=Tr_<֬*GfaHZɪY%ֈEr8 [Dш4^%VO TUt |.{IMh[4]xV~㙙% (xz[24Fv|cc V3BQJBTpzKX BxPgq0 {=]e/ 4L쉜Y.7y'Ae5Z4\V܆sM\Mfvie%L^e߇KN],%s0n\ Ĥiϻ2K/g=_=wW,rV;g.o@8$#'y t\FiƋf~~^'m%^~yqr 1{7/zߵ~qL;XwM/%]T7֠i!5LUG5ydK'`Uh1KSw-7cTzp mir"%Ym>f(BݵO!Pl;[-*j$Th>xmJ[)́8H猄 .i'<CLJ&%TH6cȣY20)T^H%xVjH2׏ w"i/kפVe?K-%7JX/>jr'Ҋd 4߹Q&s 'nV-$`xezЩn%KtT'(VBDՐ+B\V<3A8,V=88%bq2UtJ.z4}IFO$J`У.ni ynUROҵ]U׍p&,h%5iiOrkmz'O٧U2 +S}\_p`HS0͇ JUӿ~ IAXmusukʷtxg:teiܞVxvJ>u#98HgtM9_ eI&1nĉk{6vOH5Dj/Z}^p>?չO͟C&Am~1^f{Mem63׶.Xȭsu$<5")86W~oC2el%~f2uJ6[J 7QX*\4( ʥ݌7lq!Xs,Y |*rʧ 6AKnjPwz;}w]m%eNpE\meSfOw7TC^D]TcG R #X-xqQpTE`Cw:l{Ze2ryOw7gPŅ Lrf9\;LsuBƶB4%)o:ڇAD)X+cp){Yf kOXFJ6Zy>Ha? 4~@&lzD(Ĝq-YzU@inez*Lzڿd_.~ 1<]-&Y"zꇋ$#W}﫟L.5 \Nm dyxC+68 iR"it+ΤoEDk$"Fh CfI=gPDŽ56 {kbU:$QƁ^AWWvCq0hՇV)siLZ^NZCfu*!#F c56 9g+ ͖:ռʖPwj[_Bb5w];X$8 ~]zK;MyE,ڶh9HNʈ|^cQ $dz5 +s,Qd'ַsސO`םhW5 ~n|T$8 -xQ<..CDFJ -Hxϖv,)'-9~XٲN$]`LjȨLZaFz63G46YB/,)YCN@DN(4:éBmdsv(Ykwt{gm ͂3plK޽2)zM+> ,W<ԛ '% l/D i>*؝lc֗뗘ʓN pƂ'xWV2_Y]hd(3t&P~I0%eˆ&yiZ@џWF|]d!2LȪ|+ K,[ }r#o8d%FfɎHxZ'ťQj+T H֋]dU;^T-AN[=3&؀ * l(+e) 3j@Uu\3ABځ<xV?^);F=kf|e;]$sqxSI˧Gv2zE#[vyw)G/t&V>j PhQ >죮f@n0$ $!ޯ;n3pjjրT9!FWaGuHq[`qo"NF_N8Da%|z%ѲD.蒌 $@fpx)3}[2ݫ@jŠ:|71c$ J5(8zgԣѠ}G$Zn8Ӱ!jb")I)=q#zp *wrw5x@#_w vU {ރ30B^֏~zN>N( 0o>]hϦp{9loOXG< I)yynr'ebzzxwq>_r*%o@ 1>l toȠ;VI?[D"e^qz7tHWRƧc5㒉3Q vLσ2GDr U, `?P-酢[3ٙIUmlMVA?pގǛЬBR_.~gDQڔK_>ϋ[x{ql }2OCb{D^W}njďkW8Է/'XK0@a5Z9#cJi7 M) fJ9 ZdLm%-nlTZяby.GMt2l?ah3HWʛ~"?<>/ )&G7WUyKlv|%%S8$>R?r(ϛF C?pb4r8:!jzwގKpWQ}mw-Iyݍu}ܓ}9bL~igTIse!gz{ysÙoԅַ)%W4ȉ(@ &F(`Bo؈H\]{7$]SȌ<emb3aώ_hԑeQb&-{Y$W3u,Hb2+##r'V~%*x$duU>>eshm;\6r%2kCr87Ю/*Ҝ1V`w6&5~'MVF#-zg3Jm4]X+ʙ&沴ndPM 鯏j 5H?lAjKWDEPl3 5 ` BZA=[{R {啀D4xٕVBS6QhmW*򴃆)p= wIsY2v&fYHe$3Q_b,ΐ) D_/1dsd}^vnL%Y^eh`89Eۂ ٞ5 t,J{;ڗuRDw(sP'iz^α7/\q!ӽ;H G@}W{wamVl^"AMƄ _iM9')9:;[e4lƘ pbEP_a!v 7?{WcQn&Ytkg}*((QkimYSg;:!ϟ:Vu"t mk*]j?RQzh-,;aک9T-8K8c~pDRYdbJrfD9 Yc[T&oXsX{h-1h^[{O^}Tuɧ/rG$HVT LX&d8q9dHSWeʐVi% iE k4$㑸ojWHϯ"̧ ki[ʍ[-<ɱ%j89?p͛;UYf`aeLhmmOuWG(`Ca K7zDob [ٓQfc +5z\xA]/$iP?jS 'XSF׫|m8FK)UA3Xld<$N[ d\|NNHHg6YPI䜓Bha0Ϣ=kc(\sB@8tJV̊^uwz(~нK)hb*d؎Xmޫ;i8=6b%Yў e0~,L':}>*ZL ˙dQ"#1Nմ-$d=MsAgҤq=jd0ei͟TFI铇4qXsgй4N0S @ (s \@2}|[>~Kom/YKI5ع?/f!+|صJ] ij zNZ&zmoybmbv񾲷klGd|}{x~\uabMlޭnJ;zH|3EMWZлf3Ǹx]׊ExTLv~M|n'^6HZ&1#]`5[-& /R:/Gh?FBm.@* 拫! 9zK۔I 1n""ZfҮH3UasQw ,ƛKA>Cb$bWw%˳|$򮆟n@A QG1A(wG|5J"X1)A_^Kr3=-M2{5:ޯ@א?%j7/A_> 67#6 tgZ:,x Ǒ$&)91ڳy%$dwx[#avp8tR8`87~vʂQn{0jjGGx&6< *w(Z \!=D\1c x52S7'X/7^4oʢYnN*߀m!CI*@*+$SjteCvugHtKAiB!Ai.q)]8 X*H^fslDsT&ў5^VGK Yn[-`rkiT9f [G$aw 7vZ|v:Wֽy A||`884I%^^6Vmo&hx8& GdUYCUD* 24UUZ*9hgeyc ք4 u(EUĠL}T5F?ueK05 i:^gX3\e[svkݸlgmkT: %Đ' a-H|qPO~=H$a.8YZ!Y֖b5sv5^rV h9Pt2(Y2SK9~fu:~p|d.UCKT"YiC`HRLA/6I`Ԓ n#?Wd ;{ UXb[˫I'zӗH(s@H]wx9ڝ9[\Q`"wZrr,zIW%[kI^H$/wn}:)&G ?֨8g&4E>>=b\޶[PUwn] GO~->C?oIxDObyXzK\^义Q@vN_Z C:uZ$eO3Piu?Z-SeWov,'r|So/GpڌňﯮgIr\LPz\{ss@͘S :3MYfV]sAX ?#  ?>lc!1cIdEw 4 £/(M? [-]r5 dn]5Eю6d fLEs7JWy]_& =ZjV!?UX WXwv抏FbC>%^w &dBv}ǻ$Hc?^dG1ƚ4?yr7:>m(TD f}fVYA/#} [  N y1+z)ɿҴܖ*N }/;~Kaٻ޸n$W 0@IE> fe[ijtEvǎO_Bt9q}AY@iցܧT:X"hv>!5m>kr I1RmeBg3o1Yn@U2 t ",/0ߖ?xO&^}EQ_?/_ yˏ-0dA݊dOD&g GCrxfq4㗉.zvL?̐ h0#㍯MRL+|?oqhYnIW׿' 叻?yflߵ*nNWjm(ç+'[y7_T[DŢh5VO\uc. 
k}M>_edy9 kq5c.HGeu>a Vg~,^ ՅMõi сHhPO0xyasOM:6L(fM r%4F;ָo߷֮*~hq̔!3a(ZqQ\QEg~Cqp-0$gmv3eڕ&uSi;9!&|iS"'euǐq0Vԓ*D''T;ջH(d!hQ;A 'Ϻ6ɠRѠuY c>P;5Ȯo|{2RG|Adje8teS抔o}ai4o:*彁)[Baa=$|, PC@NO`zu2#v {)$aFkl=EۗH5me7Z`t<psnhG=dXш`v2"di<-/XY˫a=d\݈ ^n[C>#LdO'AUXx!Cɍᩯ}擫H:t\NSZWG$^ԍTWWF|tT{zרaR*=]WWuWӓSq4։Zk36E;\{(\dCa8$=;)zztby)HYػ |z2^<RYX\(gAj3n]:b[bTl?gzLy"0**7>*8/UG6"5+.YilR 5 P(.$كg>qQͰ @Rd.0Z!շ(t&A 䛨p*^&e#UGmq4l嘞ܝ&O;ZHc''3Adp F`)e|''Blr&$ɣBTP9[_*{(%X有H{URSCB) )"K,zR=lC:ӏO籠खaUU..>/)dv-|-5l:e1~1]Jr[9p),ig='X?3!-~ ߩ]h"U"(CRTrVw*콢-JpPQelyʆZ'&vGQvNbW̕%s5u7j,=ZNpeVV>}ulƃIh J$K{hZn'z#j=?\_+ & 5yL̺} 8^t$ /rlC„4pUMc_4V=qޡ"h8 E/#hOON*XycJG;=-٤fh(E܋'ˋ%qȾ;1gj-H;t.8?4za"+Q;X=r.AYj/.o%cbLMw>/K &l0>T}쿞hO,Gn?.PXNמ|]a},o-ݱxw۷R_%g.kϛwg_݅gϟX}bu?MnAu5=@7w^ϋ-ȯn|md_#S@шdYfD9yR Fn,7[G^Mt{ɔXQٛvcwi]5`VP@\Z.Y6: B1˿~odT2vڒn<)cmݕ#.4\v_aՊ^YSOѦقC&緗u@n(m2h k'J=FNBcEt#\ SêX69W3nxrN9D(B BYb MDJ!gꇜ1gt"[A'%s"t6 5.;=3ޗ-F6 ;_DΚEݥKZbNta \J6'd҉TgM'rx-aR)Jgd>; 4 D3Ϟ~X1DVKЎEyZPk8abF3P&{[%kt:'Lja- p_ݑUưȾ3y]ҽ׵6NF9+J9_ٹRRNv\PϭM+ԳN[pc"~X'>ߟ>zuOe叻٭ӵDŽg^rίSÏ4_ݙ%~x-ӟE2ÿuBO}l I2ؖ܋!p?z:Zc`ۋʾ;o$x7_-?X&/Wqq.#=ƋkΗ)Bޗ[-k^E|s}fgm[Λo*T ~&^gW_L.^OfTc=;=FǑm޵uR֍ƭan.eR"H. EnfY|~rA ql{`يuy W'd%sr[8 I΍[:;\OlͷtYnZE> LB+*DW&yҜE͹/Njr|=8|-Ej@.uc1U4m!pj?%Q$(X ;,UWG15> yfµgWgrԫG 41kY"8EpU2dhlq X'sͶ m* ;ɝHT! -;H-K׋:q\IS~U&Jp=u6-xu]8mv x`ﲓ2:.[[k۫5*2X+[ EGKr58MϮ4*J.mQrx)Fn3@᤻BtLFC!U8Oue~<>ɍld?yIlS˽8>@ߌ~LQtGigȳrj1꒿OΕ/fad(fp衔x XNb炐yT 1 _:rթHZ2RJsI9dP7JZOIwjYTIɩF&P0BUԺ9OgtDZfTy[54:"ZgQHfMܸHvݘWDIdO$3.u;O$0& #U5!u*)JvR2/;v>uR1'C ɉ Gu7}Ջp'ق'XD30"%4aOHnN8z2ֱ:x ii4p(1Djևqh1@do]iQX=>W 9dөXe ,Җz+ڃӲv(S|da(pq=5WE~8K^U1# >_/d#3r\pȐRDGXW j#0+ 믁DaL p'vh]~0߿/GA$|6GW, N噣3&4zeD2q/|QяQkQJ3@nax >4`Fyg#dpo2uUU ~}}¡yWZ?N<4caerg>/n}uߜƅ#vp: "|ONjK/#j(B ZqJD/'2QVQtT9HWR6٭ 0DH)"akZg ah0P:k98mCkr] E=j {%rdbAι5#~~zw6Yw~N= :oʉI(tF*BHJn*r1 of2]r>7qۏ+)ǻU_Bi/N'=a·5,|/b"*拲b^. KACeun]daAb,W':WϏ _?_f ZtKJ^ݎapk bo {?w]饿MG^BpQ-|ZXʘH*)-U]UWԧz-D׸24q*[꽖s*ZK' -Dnqਸ̃{ת B#T iAM8H= sQH$<06ACX+n!HQ>ApKw #DEtTL/$}k&{s<%^Jʮ_zU{/'~1s$u&| pO?l{=qa ;p+6׹L09H^D{BQ ?C'w6,$l+Cg&o˅waZ- x+֭YԜ_A)EX܊n.f엋yV(mw?n9z[Q H~QaA~޽Yofn񫟂D]ބgЬl'+ ~Yn+f[([-ySP~z?ﮮMEf1KD.J^Abg;cøiP8^?@()4w~ȶ祉 m<<-D<_@0\  R4v)ZS/2OBŮ)?_](0͕TTZUO h;W%r5%~K`˂Vl DrlOcK@o Ж'mQ$GK8en/e-Kf.S޽L> @[vMKhQ|ll\tQݯ2z'gfoݐe* ͖%mb+dݭK\x˯O>Lo7N1ӻd=n16vme/aJ?{۶m /Ey? C8$K a[Y2$E!iIsT"hbYkϞ=5`dT!I-KYBI92n^ wLȸ)MrT~wʅ[{/FR^NLPi);3U.om ē\e\%}N/b &Q[ 9sn ;W ^ogתwk)qZ^+tH(+ {^:ۮ"o?YbWYrg]f t<5JKvC2(m竟nPQxeNeVT0.`dK̑Me6PmbYmr9~EjNt(uqڹh[+(B _m+$|˵p՘Q*hWMҜ-LZyslpV#צ/? 
0}h?}ۧW/d]MIr@sm0um~;إB t]{sr&o :NZ(~Y?dQ&7:QQ WEIwW۽Z?TpLEVߑTozЊ87nX^Z~ NƟF$BAIm\b9& z^Tm`t̔Wόl}fܜ2j$j" BzytϳCy 2rM.̗'+~GhrmiY,ILT.nxĻ(@'N>2_=T#{_twL&wn<wQ'*] }* "Z6XP(]r2AݏTJ|b9=@܅|.S SCD"0$SJiK/dRY^Opn 3'ZCgMW ^/~L!b昇rN@JR$DL*51k1ێ1+BlO A, zHhN5cOğVfrfZyu+Lk.vp_QfZ`@_"OO)fI4`>VOպh5@[=Ѵ"@RDM:'3]Ȅ \wn=X,uČ;ϳYLCf)AN`+ZAz4_͉Mh'eUw*9L{AG]\{uSC>͇58im =-nKuO;afyZM03R!6 Lji>=s*[{_M197,)_w;dM B+\APSBv?#]jƶO'o1xcxMoнj8e4 T<)25u)#LB(O1MP4H_L}3/_Q^tUׂ.Br`c:_M ?ڕ!wɃMl:_|#(E0c&kifBj}5 nŋArZ .g翇mt^}nZՅZEهPZV^!EɻO#H Wq&<@;`D'QUF 8"\ANA!Ia!9J1U +!D ;DaCbG(A`{RcYjpj?la[#!2E ,<;S xKzAHWMo*{ӭnrbs{][|Lw(Ͷb͹p ?7wOYh\UI#.depw %P(V}D@uȅ$FTr3H}7 /g q ۯm1k>vc~ɩO9xdi<ҭp~ߛKt<߫\5ns%udwvT52R1HNO|4; Exji.5 )&nni7?\R8ht1^a9Q7]Xc[J 7Y.Gm6@;0 /z@]ꈤq7R¹Q:'T&sA"9H!@bz }^j"4iDP LQPL0 MRIx(E( `J @}jw H-b *QBQ'8 "aBT5*Q,`FX&01D p KAɥ-2qKۂ 0m>Yt3-(c@pѢp"mA*+m}Zuzr!|lwBboplF1*!h-Vxm%_so- mĖöQ;~O{rF "1ia+礇Ʊ3itMIo:f`1Q$g=vx#cZp#+`+Z?;ZH Z!1Œ?8npQB+Wh۝޲FvŐ{08l߇*HO@|`T~cY} y1U뼴{ЛنMYK; \z|\<|]濩쿂}.NGM~<S p2F<=[w͖dd-Iky2zg:HK%Zr:@ӡH8Kk9z~w1Q:rv^0 a્-{a< UJ}Uz^Ta)ǦZf˓C$/-}r5;f0c)i`i`~  L}sPpx(5vgPf5Zʪj3^sJJ Ш\iCQ-wz\xP4]D*b!QmmY-*vebE2\g ̜ykʴm&^kOs{A}_@ܻ^6YxN@'4IW7(Np[{ЌW;H<oZux;A##^QhL: yOg7~tW%仱k7G̪2m;\YSW6߷v\EwSGUtW҉ОF.O`39W҉<̓ѦyuϵJǏnAߵAONsS11HДJc"&9Ca4 #E1 X) u\AGհ|w/Fg6Vߨj3^v BmLe p<zjQY#,1ReQaWwAXN_q.n%Vwah_)Dןө|]Jmާe䖛 ]/OVjJaxE$^YvTvg[c(ϧ,8ҝX'_+m\#]xY.r‹Iib抁븖& 2N-knYF.<wQܶ/?^iqgoia%G]ok=.u@tiNW]I7?]n6_wj 2JkaJ|@*\ǥ;Qݫc LITdK(/ZWN$^z0wK eu¦.s(%P6MbZ~Zߦ&[#?l-/z1zff]%s )!@$<0S )Ϭ\U|)@OpO`V-6k G9B>O(\7Fp=d%W'&&.A}vK&!x|yt dǠO@f-g oR: $D&Zwbsͱ^YH4ti 6AӶֻkf 60-V%ir h*I̴ф:u*w@:O֧0S}_FEFJ(Ȳ B1P0S94ҮĒ Lۉ6iWG:^m*pYy+R'sSS1M4oM=*_#bTP$hD1'2qʣX*qJ Eb,Ch‚_L}3/_Qw˴ׂ.BrgZ? Wuv8~+c 4!) |]Y60HFJ( ycQ!O$<"Bo '@["0oQfL?$fL$Q c %aQJiX0A+REUfߊ؇hqv<)d\ӷ\2*5WOϯۧ +ެU[Coû{@lr=~z3Qud||X,A3γ?ٟgf~3K16M@0"KVwCdcí,g?XǗ_ Uh? 
LPr&+ %h & #)1|*TH$2 EBD0HRX @ "cJ%IU")bH ?HRnXԄkRTw$ sBE#tP2k[Viڦ4EkY@`iU疩{ҹBԖ-/zݬ | Ru{ԍi$͇YCPC IMsi,,H"&r$G纥tT޵6rcٿ"`[2@63AzU,my%әUL,HV\V{%y PjѪVf>0F`&ݐbݭOw汵E`T4[ޟA=8DB'aq5ua6~@c<]E!#'~Ũ1?֌>=AKn1b%fG`] 췻M "򷇥~UKҰB:9Nۖa7Ww q_.o,9bpO~1ctJZF)v\0,W8*W 0(]L&i||~p6y#)VuYHztu+ϩmb}nr7mC(m!׃ψ@F3}4N@fA:ݟćt./ H N4#xM4c`#L,B0EP@$RHf*\%!Ucbe]Kwz86]qPVݶP~Էz-u~h $B)[zkRKK9zW!4ִfa( HJ!K*TMj?ˁ$bMG9 FB^*;UcKʈbdXT ;(2v҂Һ 0$ gwRcHU3eWo*uo -]w8t<8so-w3L:[~ {.Mm]UӼK4]lg ~_\BaA;ZɝB^4l`9El/%lyoAz¦N<&юf;=u+lkYn۔u Ѹc-Ғa>?a|Á>?]&Yb .K; & )BJWRaAEʇ|!K ݒw WF2-4>@6ZXY!Tm^o¶MáAVTA.3Og۽ǸP$W8gzyo6k`F~}>,={X?m+)|  Gm˳W, DT2ż~~k錢óCǪBi]Z8J~f F .O{kr}/'[ljnPhטJpa8cPF(!b$q~Q|5 +P}L}("D6_ 5J@y.%̒v2@O1Ba>3_t3J8y/%f{j-{Iu" >%8ƫ[G ,f^ppfx@d>'Hw^LQWSU4,.=|(b8ZqRi?;\8ss- !FGƟwKFȑq1$h@,<U?^ Kw^:7 NfaG½9z*&ǻhC~;9 Ǹ5[gA֠C%pF0;dzzc YXv%}"_"d ԙK fw7I=͂;t`"Ip=V;2A+Aк߲ڈ3"k#ڧ{tj#:0J`Ls"1C.8G^}ʣ3s*]`_u3svcةv oupaNl-/ s~ '1SyI{KF -*/ +/岑MۑMrAG #/0?RvㆌI`؍FvYMۉ*?XHaL6 ׺k>}[UGI;#Dwb-]deA iPqOvS]FGeձj7ImaN*NQ=gX{zוe㬊ӽfX\۽U C@ ٥\V1wrҴ-19voRoв⭡eYQCxϝe@C XV%npjAN@8IC @0۸e* Z|Wi_7]|#OӃ-+,fݞˆɃB{BkQA˪BPj@iQouIvo\կZELN/3-V6'|OZ.WwC.>+S.pҫ:%B¤D XkB 5(Vӥ;ܖҎ *K0!uLM;R O#?FCDӫ- >$]Cft#^c+H'oJ>)ƣ(d,"#Y - R:Zzډq jac/mo߭C M6ߏۯbN:oɷ$eQxzн%?_g7OZUo]R^\Qߺi *Z$EYcS`Łwwy7<#%V>myNoH|NoH`$VW1*W# 襆L۽yC!NmrU>)gSąr88\R,IpȪRi`gap% Ȓ +C J3*/]EA5`)>i=|#UpF!E A/AKFM0BY%.טnyYI/rq:x[ɃiO;GY<-w4Mhnҩ0aۀ Ul{@ 2(qt?rWA G HCeJIȿ0nm_XnG??7n[/3ܲ];^5?]X"!3P[23J8Å%dߥjeaǑk;PTP'O lD۠.(đahpK8U(𕦓i9ب PBx-j(n&\j^s FLyq>+lJ7I.&abRgYY.?:'4X4WCrQMU׵ "TvC4PT`Re J"U %>u\eѮyqsm]bS#ф5t#Q !¼>T?~\t]qDWs@8gY=;e}uq#PHCyA 7EkX@Hi`ETŁw6yB[B^ں' ?7Al|˥^؞2/cΣؙޯH}e$ثUM{ZjL]x t&_\ً;"NG H>ys)G u ǟ $I—@A1Yu0(Pg0 j3KgWM  ĈiR -}!yUUJjkc_USTȘ(SV1R\e dj5_ R!:R)>w@D>'9U٬ X%U4ğcHyݩV3 gJ_Q̼2ULMT:)Kiq'R( Vҕ-8| hNc;Rp@bBr=g"#Esa-eyU`?٬#lsvC @LX(S$!,e1Nhݐi&B8 !@cӔJ*pOTubx3ŏɗT2$Y6}Y/KEn .])ҫe_E_ Ȭx;N?i_u]"0.?o:/ a1蕄̀BguͿ[ΙyZbQ7Mg=T \{pA@a`L ܧ@Aq]Ⱦz97Ň~]Ck~ktn2Fw{ 9תC_# CO%.AXfHݦp)RbUNdIV?&NҔ9vBqqz1`Y ]{Z{,Y7흦 ѓZ7Koh[BP Q bUQkllPrVeȈ =># k۬o`|̛}u)Ď`rԟpA \ຄ}' ԣ˰p}DZP .bk oGx{Arkg)K/7vKFݪ4-?KCFi ?˩M-rx^m=9cCh ԛ~Qxv:prxlr3认8br48&ݬBD[PP&F1(:kR9`zܲ|uE\-&h6~4"-#;&Fh/7w]J2".!lJ-0^ %9R}HE9;EաwyI*ɪE7n`q̥$HY2?]ZD7p5P=ld=~El:0~MD>))!?M,g̳>؟d'S }7::M:mZL}mXSoMe2ߟ͊ f1_:y̷̎ogЄ& RrcPVep??t8XudF4l^t0yh٢e,R/nmu޵mډn"iy,G9,Y"B$8 %ΧČpj~ۘYl@v-|r5ݬ /Z[aE^ƴctd/Q:1R>YЃeBfX=܉ڼVdG:7B(n5MjϚDK`lǛ!/{o+/z9U&|lOD=)-k.Qө\\vM` @EڪJ}.tVҲ ๼}8*8eP[p쇚BZD:`R C[b;ϼ`]\m@I.˦Jq{CuqCX!kfx{-,F FHGn@vypA #ū}ĖҢ,ag!jrί>=V}DZ DBxD3{ -7 z=ďpN|j%xVїxV֯>$:^(glEZ-QŇ9*=#HjXUgyp^_܊d3xvRS08g;&D;Q5e+6d CrG: F7Vϵ$.V6bZO]lQޞcC.Fc?ضNP~3J ,#@cSJ/7B3ں{cH\|V] w.1bB ^k,W5p$`6NН #>wyA%=~^BZ* iViO]T<{<[W @*Y|=d١lS99n$վT91n[2M2Wli|_ݴe9i&Z* ^j22jR217#NA/lFv"Rދ|!0Ns ĻUof.7:,^7@f -41^Y  .Tny^ T\(8H,i$#_<ke@ς_% cľ.A`0Kp^MW܏ {ڛ*8f7⪩ۿzdz9oڪ 9ӥ͉҃c9ҟ7w㯇J<N102Q<RarbKs4@"E_?V ̊GfG?ó.V?t6_b2 0* .T3Qw;OK~\<>}i2Cj"<|'#)E3>뇮pP01֌,TLU킔.Wq7Mpr >S Rqkf୘~Ht\"f0q>> |$׌Orj+Yyo7uӧGRt=]_]9P_ÇᶔBrD0`=uo,YoԣB&3NI:l+oqkg`Ph6'(r=:?ϯ#|gF|)bL  @S7$D8)<ɸ"D#g G4Ezǂkv:Ch>~L7닮@7=/F-+ަ̋wv/?IK:nYюBHGp=b⾲[IH s;mFGHrIVr+U8P5 AKMnS|:så9D48ee2 Ua48-iF?!LL#Kp Jր\*W⮨aSeӼtGKĹvND>qJ=ҊlA;NoP J^|rn~%mmP.2\"yT*պF|޴*"z3z1UZ;%z -׈CSnB2V݅li 9#; #RyG9(rùT^]]E0#;[G^ֱR:4ؒxj[G Ýj>[jSߌN500xby-ŇH*+.CWâ{Оud)=6PHv\(,뤌ܭt@bЫ!q{]KNX<)}"ows^Kߞ cCh :Vx{2s1  "{=εrǣ ϬWL0hZPWpر$;rt5CЛG@QwNP%(׹*Htkzw^p*=f=}m)GLҀ/0́HHHRΣDXir71ۼ.Kn[7~\(DBq<1Dž *8ƬѲseBWLGPPqwsKq`,BV`OqLS3M4ŻjS8H8"II0[ouKV.q =@J}$a`ejr\RD `IYzl7,キ f%Xb(Hz[02# 'fUN};ΥrNr;LD't(@4~}$n+ 1i'< .l\Y^9*ra/{9ԃɼ2=0fzzgy1QKyEqy)6vU-2q0U}٭ZlãAOI^] Ѭ#*"d 9T9E5עY٠ؼ#kp\h:&"OS.L2l"p]is㶖+,}y&}q?vRI,_rqmȒGKw e%ꎛppN/׆1Vc9:b޵zg|8 F˹u={skV9wmsy1+0ʙҽ\R-N,D\@.OFpN% }]:eTemv"qBZM!p{8ؾ||{f NZ>,ޕŒT{W*U 
~[Y0e@d{QݿNq1idniFmk>B7$679\~4)DOKj]DSKmT(] 7̭ͨ~?m>DIfrr\\UAaW WB*/h2 "~ ^ߑјh [6sYb 0&_L?y7~̪,]5׃gV}ҎCdWd$Uwgx~(+|}ݼ:X_;3/4Ү|>>;bCN+?n|br<@_wU8/Y-}fd7R9 <<^G0"|K&|iW GPlz-UGuVZYifjH^k[.BGT펧+@" eqm&#]Giu7p!GMz:`NX?\ٹպN|]:!X,ZEEݍV`(gK{l[?=gĽI%}i&N,^˝^] 41[˪`AAڵU97(fCƝBZV:LI8^)ͯ(}G5.lZ6=YFjT*;BZalb{o@KVy<=-5 ҆ڎTZAɠ(>NXׇ|14'u]2EH;!9 "$i`(՛7Wli^#) ̇lᘣ7o?%h'O1A/=)fhos~|XѴ'ipM>96'oл0Ԗ!W)wyO1{39f<<7Tl!7rRƍz|Z|&77An,Xh1'LJ,!sfiRau&{IQzS u^A4̧d?r'x~d`@>D| >`3OQ) pomCIq9}?^͍bIΉr@wG<-쐊}<`R7R8UhRo:s_uAԗj|Z׳)/|B~-YE&_8fO~ ,>:*])L IV"j ?9$J&xcd+&/˧O6h!36+l! \ЏN y@C~m<G }^1c<쁜BO_s sm~O\0@-ImԢF%HQuɝ0 $kNȅᮊ!&(bHTHR~1@4;fyuqZteŚ=jmV )luc˓Af棦J?#] }e~6;cPwƠA jM< 'T?FB䐧+FT!EyJ %i"ӈ"]_M}n3Wlm"M>б"iWUpy@ tlԮ+ T5m}Xd #͏?A3)!ޠ3noiAoY8U).>~?VMyU`r++ac8wt{u5BkC4dX%)0biPDAL41@Qj?ohѻB,9 bIX1%P\cyG+QJNG '@ITy̓4 Q(JpXJSe82r"82pԞih?Ę0WR. FN7$D1I$!#'cnq7Gu Fp"~*ПV?#vlύKm:N./k7ּ:^E@QA!"\B^?zȳ܁Np.0,LsT0^J`ns™q{BRv#uu@t"8ꨞk4`*P0"D%(Dy"b%ѮY$#"'n\2JFډ`xD%2a* ID)aBD(18$5pĈTqԨe ӝQd_ڃʿJ$ YIփՏe~](+9ߠ+Tv'=>{bCMM1пz7lXQqu}W3pf4쟙Pq9M-$W5q)DrDyCw`Rd j"I6ZіM+Y noE*"V)@qDBm͈iO9=^z "Ф$_Z%`juϓǎZY>UfzNɉ78=&$t:^}O*~lvjK/>8'fŬPϋdo p|/G _{%㥾`oҥ.*?³ay=kۀh;ΆѮ~'7Zܘčpb.\\vtXX{2; (mv}%̧kx;^SANB[v(K`iw;"^: Z/p_ⳁlX&% Dz e9`2MQ'.1\۠^Ku+W@ -t,<ݙYxh6Avmr0V +~`\X E{+?[kҝ 7@(}lk욭neҿ;)s| z$N8ۋ5*nHѲLN6mIQTc$=ZgUvꛟh7Pe}<uʐd()=Q#ؾ4Z lo՟ډF&&% ϊ M57c4>!{^ PTkVշVyAzQFVsQ|ǛT+\Bfkl`؆_Zi*%Su堊SQfZTw~v*^2 EHXj#I)yIg&=3SlvZE&]϶,-cգ-id!@(L4 B}g HU?ʪ JODAD@t=YD|-ѭde eXfrn#z~RRU.5NTfx1L~]̵CRC"wt?̐%2^_7_ڛbY7~Zt q揅̿Rt'4[x_27E{Tԙi :z}[c|t?y߼-}a[DX[$Z!@WEiP!hDžߺٗ" YaՊ8Dvq[gAV Ι,ӱWsLw4y>?,9c0pK;ϻ*NN^~=.S*XwuYj=yi&ntz A#Hj쬧9z0{ÉL4Qu7.Dcs<]`2]\K6 6E{q.?F^rS+d s4_tfܬ9fdw1)xϯ8euzpdH4ezaP9ϛrMV*żn5Znu1i:uQŻ_3[K梅yզ @x7Q'[] lNoTn 4OYz7-̛h6RSҏbQ[Kی.5/<omw6K-%>:ri R tt GG}tTt#A>:#GGG}t8!DV_l~1Ml6Gw?F\|ҏybXhHC5TDPc.!6ǰDslTSI"Ys e̖S*NZzdCnطbh繿'SaO.y%fͫHNA[zՕD*C>*fX7i縶{ HgS,~6 S;q2Q$-b+g?=#LI=1g*dӒ'Rd9LzO8A/X1 P_ܪOޠ"t=y L jLC\  [xaT CG %yU\ qԈLt|o6*MzO~۫FcnD^9WڕOv?EW~鉋;~>2$MW}o煓 XvrKc.]@>b-SǠRe$u:sU V*`KSn/  v1xΌb J m}AhP0F|B 6!6vhY x~J̞Q`ӑ dM|Pӿn_F Y 45ոGZ8t5T[2??;>L3>Wo1tʘ$ 9@ǔ ARq"AhThQ$#@" > Wƨd;I ~*Xi@6ԍLcJg6yFIܠ1P}zrm~9.oNQ ,+r>8ߍUhٻ6n$rRh:]UҦ.v%q>y˅CzI%^Ɛ/ ."3ݿn4FC~pSNnzafu;_|/Χ>Oc<g/ZE<1r;f,f1*M*3pZ E`$YJ-)Q=2A畕B!dtJzR P(G<5^*l!T^FlSJpTrQ_@Q'SRV{#AP̚HkdvLJSjw|Vmzu?N덑ůû{]Lz|QyP~s>%.n՟ ·ߞ=y8pqB!Mڿns?ǻ;> [ _К'@0*M\w7)N_T^*DW4`n,Zo:gS9f8{ v0F:L(N3!K"ՇV#J; `3YЩ|R_ p#.3У^iA%[_vς񫏛c2wD h-&+`B#gBаר5(E%pXAAv==I#6[i4eRnԾYv-;h& i1Q)F6>D[)Z> iHQ<ֺ9G$uܵsmU;{FօI"M? WNRR?'RWa?g߇4%9P#heCH]:|:cЙlqxTC|[/ @h/disiڬ8{`Bh]NyI.u"kX]2(t3iMLF_Ur%s BdB ZYV'߫+0Icݚ$lx7߆OռBˬ2*c]$'uW "k;g0 % !JbAD*@%_Ee#a;$iH=QpG+?#wsԳLM[u зAJKpDj"3Rf3r4I]/z v>6bl2eaP&[թU9 ? )0;b׿Χףkp@-6g@* mak : 1N5ȣ %]NͺZ+!*pzݥ?Y[L+gevI.Ry $èSq BU*{a$jq V lm|ӫSڮqJ$ XۇOz"r) .5Uڊ2%4*ou]:[QE^as뎤&AW%!v-! P/  ɹuiQP(qGסC<G7%<#D3r8?=C vq2^5%'KU[ ??d]NDQ흏PG"5w7u=Drfګ.T0ITs` e #BibbΓxgK˹B0CxTDQ"q _'P v`j@DsdKסj&IY7\ .tiT[i=f1 4*ʹQS+## @B%օ֜=;?hT^?;?8 MO WC̾:=xy-O?ۜLJ VT=g\PuJ(Ϙ҆h@2ʣ,U.l;lyZ6Nʞk?A ԁC\U4[cu0ȡ1N 6DF2& 4,H3Q=_!2JKhx"׌b ׫"՚46X-L# %3_&J8p" H)8yQ<Ӟ((LYpN~V Gb ޗgxC Ynx&G {<U0@6ScT; CԊ1i \!ew p-U`[(\% $p(ɧ"rZ LXJ,&9գYSy~'1+p? 
+ɫ< HFZ!83puj'ϔ#Ꞡ ŭzZNeAje {MLzwDE'ҹ4|1]BueTcW )@nBj#\5xơ((1x Gb`0LQ#o"RRKgQB@ ^ ʛR3dqL+S-uۥ6αIfyf sҶ{^ł!P{Z5hHwNToޮU+_7j;A P)]4'ulJ&ˤD0\Gn9 _)_< D7O%O:鑩t/vMk3w ۮ" ,x9W.,$kC9 6AP[5 T*\O!ʉj951x9Yrp[5[| -.֠S¬~5̡9owS2[OD8 '!xG |k:CrG:k8>3}< mK Vr޹WDg+Q )0߂Z+v.E0=µ5F>Ly:Jq=tː-.C?}lҍ٫UpLsmى_5@ YN*5x݈jҮڰ!S_6%kZn4~?F uqՙ z-R?MfQIL& ;zM:酝Vz~W;<ϓܲfό ȽϭAqLF^9vH3#]L*Uk?UIVZLR+Ù^ gր\jC+;;v_5Pд, "y5 GWjHlYM+KZQroBDk0_XN+~kiJ \1[ؐZ7.1 l~ ZZWJ3Sz)lv}'o;90H/Ά.H |?M#5N΋2cb ص>cBybM-FベL5̝Qɧ?Y$eY1q?SXF)c~d-98eL1YSjf4rkP>$[{ggf=Uw?OqȓwNh 8ZH,qC)6hx@ L vنȝ~uV|=3@,6@Md+y+\>$$_W@CT'0LU:#4ys;[t>N{ʹY9> VЎn>1 |O[2zһDa3d/V+r҈Ue=o'@J_&)5GIyޯK)Vn yBi<4BO_knђG%1 "Ţ0WF4PPJTFCqȦ`olj`$Ky]i-e1/ &AsK V5T@ B{xqsH5уgͩDlh&-p=nz%Q=hEsO )P8 w`@zEV(qc*9GXBHlKT\EJPYZfR-{&k.йc]^9ӑb>_*:nwbX+^-MEc5zˁ̀F[:H\hao{j)7m>7%z?݆缇I~}0Ѩ}w[ K"s-BA )MX!#.ŲUMm esE-dJbJ3,9ETkA(m)i8iUA_RaʳPJ]՘}ύbLzk9D)ywYH\,K㗴+ߍ'iĊ5R'tE8 ]  ) eIH*;m"G8+Xk'b}%WrD+.).#RWV Jq}]!SKtK28DnI6Yy0weQ =\v}xX]dj`d؟@}9",RB57sOtIQ?bQ.?,Dh&pn9-h] #wA޵6%ȗ^9E=@w~Ag'$ǼTq2vT.7qYEHQ$EL̂^p?zY6Ӣ- F z/1vzFd .BɅKKQw&ZyL"3ҿ*a lk5FUZ[\kc1р \Goc^`֢,D 5JijHV`4gĈW r6DXd ) tR¦֘XVی,X CD}R VC`,^ޫ<Ӽ=F(+kU}Ujq~#LJACcpǎW軋%c`ƛnẹ8ۻ)cyhKN"]}*98#PJd-qi߿҇VdF1E-Z+2H \ŗQ Ia ҍs6P$\jFX$T+){a-tja/*&YY13&$mem,P*$B;9%S9kgxvAZ数b[$:U N<#¬`w7ь2CcbM.|C*\ZbozA}o/n N}|mG1B/9z ix!#'YߺIٕr"|JAaƘxy0^AjcC(A^ߘNw83դoQ?ۛyhx~!vx^o{Rx |bACqCl\oq@ZꤵA,3j4|cA/B {~T|9jw!&nQ;QqTI"MtIـe*H|Y)Ft;\ZgĐy]m&d ˋd reȕf%\&pAXlH`dHrX.?k/Q&X^ΒAAչ85X[x3 N68x N7#>ߙ2Af %f[ϐHYl9AhfSګ+ZH;o6 !ujSD/ڤpj'TTTkt9H>J)VnHEC.դYJVZ$HrOg68H^Ki_kR&vBJCTkAJZJS 8턔֕xZKR5+9ɂJ13']Щt?\~I0\;P<9\(an.jJFePMZ5T^"M"D5o ԼsB1 9b*Yšg&*rK~*'~:mx;&W+ZNl1l:n! Ć7+#zVד_ځxPZο.^xpGQiݏW^ޟ~s@ߘm!>wwbģcW!9l PIe)LwM˒BgT+CFi)IUa; kSk WP2_ j'\Hڀ BҐcy̙g y7jvkT\x% /[җ=qdr$lbRFu6^IVh+e4JDj J>[\dFJ G AWF1&hqDmWb'|ohu4P~ӿ|,hރJLuvZg= UL.-7Ƽ9o]`<ɚɏL_}RWeßL=;4 "wZ2Og&&Z_k,b5RZcS|{RWTs(ϺR NJA;mJ2O_QM RRj*L-g~_^zL \~r]r{Wpf?2T?=Z(7/|6!ϘIAi Rg>-V*γYkqR1,"\}>1kM|m#PCΤI(г(IqO$: 4P0dg}0 Hs^a Ͳgn*aI֏i)0ğK~(P&J+#HٮC)ѠHL2"K_IH !1w 7Fln0ۄäc',Y0H4 o֡$'.仇AdZE*rME ׅɞ0c7dԊ@J%w@ʔ̠!鄰ZD9O|͙̏Hxcc xC6XTΉ뢮Ty.Q\q 0,\}`s 00 uQS\0,3 QE6FoO 0YƴېZ dQS- A/tO2CQi x>Iy؆ |dЖү^)8^=k2lt[P+€0y8Y*7?ׇ* ;(tNm( od+AV01݌56Y!T?}*Lf\S]5FJ耹1-d6ԠGwEo5xS '|>EbtrԵ:T (1xy Q=?2 $?:&[gD61,ٺ3;Ќ&yͺ36wIe,Kt%RBl]kChy+Z <RO~-Y<4rUl3 JxQZ [α~(/f|f%Grs hPPAq$Xݍ9x< E?|ǻDw jEW4K$A?cg'[5ZrӰ7ɉ<>؛LCP!6R: -xa 7@>)u5;fv`XMPJK1jĘzpL"8Y4XSvj"$oS&dsa&7Ye!K)S&6;MVӣ^sګΑ@ rn=G/xh3 i<~u٤ٕE.FV%ea3 Jo"fpn5>? 
dPsZZ؝MlI-趷  5#@O{690Г:3m?gOݭ߱zCél][ q6 (q1?w 3̉'@q省v:T-HJioİ ž/;i *CZx2$ zD]@Уo{?X6Ϝ\-珞=֝,W>\ՠC̟P%K[lUCZ~9c 1#`n*CvݡzӲ i9&?nEfH賯9|:j @3(:dF[+,zV@=h7L@=`W֩b!> us'YpDMYF0^auGL+.e{ⲇl3/עL!?&[6no`C;^vJ[Ƒ Z8T$zqe+a1XN[[ "/bpj5Pz%*[@CNs^BkR+TIJ1 &4elbjJCX[j.EZ,LWkdbSTZO۽'Q]Vi*_w%Gz\坱zplk{ˎHiNfw6S(@Jr3ɑ2aPTVc@WAtPs>E Vv8 2Vr t]M(K>:|q>q>_^,ybJK77o$>LϗBRš++´z('Snn@jI9Rغj4&wơ 4Rdm7eKム$IZ84UrUQaFD-ZU=>$byK*l{G}0"X+8-)$~*p ޷vH$YvE`ci:)%i;PE ^ B(Gnz Pšɛ@ד3 K_vpF3QfJQPxtQg⦬uWZY!Z͞qm;cCXzzԒu-pzFW~Ds}d)bIJTeb08_ݻ!M@tP1գB]bQbX_\̼R-ƴ]d TuaNY=t-rQsBDžSkjpԚBAH t!ԣ2O޴@TL>l:IavJv"rƝ cuX5ZCn"2% {aDdYA rO}I'>é5񕫳ɢZBggCTJY{qv~m0i3N- Sftph9N:6^ڢvg/q?ݛ"_߄j'ů提.y1VGFn_>H{8:$ KG}O/O˺m0c<;/NVy[s>5Lwmů^]*w(Y I04x/Ňڽf[-~HZZΨ#+ZvLK5?=Mo=OTr=Dl[sК1ÓzuhǞmKCkڽFkA-y)~v#[o5oH,g>/ﻚ dQ9a@1l][ NE'Ody~sdTfZfIHAFB䫁댶{c 생m;s2Zm+˱'\([omp5[k\/>ڝ6c6RK;(cvҏNXr*5`yzEءP*0lɂJ8[*N^X=Ы " -U@]q$_ Tҽm%Srz*A+TB6*iPW lm905F۷N[mi@;;jۆqLxkUo~`aӠl0jq0[٫:]wh>Hƻ%[2G]+SԖ́dkt6KeG*jn&k1oOKyerYYIkn=Cw/޵#f&yCgld I:Bd%uvK-'QԢ)Ydh)]bj0Z?ӀJEV`sxrF!)#1''A\qX"AʪY$:s w8Y(A y1 MQ2,Gywv`Ú`Ay lKv POcD IRKLC]}2R]f^oI׳(gi&lM;Y0bLMO[=3Z~v7u~( 9:(% OLgb뮗>jSNQu0K擸UYbm\NVzJS*TNK;&dS?{TMn1ŻsUrc{M܋8IU=LG|VA_S<'-'cFCY8#>i#Hvv >s Q&R[@S}-2Ĥ3x&/>WR gmf&?Qr|]aqڱ'iPo9Fg,؎!J^t1`X(t5 k##[{-/}rL)s[r#NwJ ^t>t>TVtOwYUO\׬93R+id3\|Tjq{TbR79-nU/mDc4Eu3ea;z|g$PAƴSaWanC9(QAw軅[J Iirb<f\q"QIP%bS)MU*4hՈdV҆tk0/=yAIGTX|1)*Lc_^\<;k(RO_כN1o9x`̟À@(9w]X@">Md1}+fxz]w0c5jt =Jlӂ+We\r? .xb26 1z*lS!DQC {ĎroO z6$(%8W~SpR\ØWo'pCxy9++YMO# 2n_awTB,%%tJЪݴi(9 DbH!j Hctm+۾Mw|ǷO YB?TGU#Y?Tdx:!}$6N$UY9,3qPE( ރMs5 WN? aWFSiLH`"Z3 ۫ $J9 W-U=)Ʀ6-e.1-U!#$Uin Z-U/s˰3Z*47QS݂u+JH).$ 8^Z$5F2 8kAAU=.f\iʔʨ5< Ee-0Pc1| DḟqO^i6-,^.*j\Q܋ 4H`L=QLXH(˖29vRrp 20vAQ ̨.\6B8łb\X'nqG%e:@ZYr)'\Ԫ<=w,HoloRp-ܼ+|oo-iI\+B#]wN[ cMC]ս_e2-M)|+ X:Bp8H!Vlh&ϡ@(TON6^`v>A[Q">0 (lO`>Qפ'0iަF5KE;#var/home/core/zuul-output/logs/kubelet.log0000644000000000000000004216334615137306253017713 0ustar rootrootJan 31 04:24:00 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 31 04:24:00 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 
04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc 
restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 
crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 
crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 
04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:24:00 crc 
restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:24:01 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 04:24:01 crc kubenswrapper[4931]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 04:24:01 crc kubenswrapper[4931]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 04:24:01 crc kubenswrapper[4931]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 04:24:01 crc kubenswrapper[4931]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 04:24:01 crc kubenswrapper[4931]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 04:24:01 crc kubenswrapper[4931]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.640708 4931 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649408 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649449 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649460 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649469 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649477 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649486 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649499 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649509 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649518 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649526 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649535 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649545 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649554 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649562 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649571 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649581 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649590 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649598 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649608 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649616 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 
04:24:01.649625 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649634 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649644 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649653 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649662 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649671 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649680 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649689 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649698 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649719 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649763 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649775 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649785 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649794 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649806 4931 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649818 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649828 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649837 4931 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649845 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649856 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649867 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649876 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649886 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649894 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649904 4931 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649913 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649922 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649930 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649938 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649946 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649955 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649963 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649971 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649979 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649987 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.649995 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650003 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650011 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650020 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650030 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650040 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650049 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650058 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650067 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650075 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650085 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 
31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650093 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650102 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650112 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650121 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.650131 4931 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651015 4931 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651042 4931 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651058 4931 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651070 4931 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651082 4931 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651092 4931 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651105 4931 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651117 4931 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651138 4931 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651148 4931 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651158 4931 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651171 4931 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651181 4931 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651191 4931 flags.go:64] FLAG: --cgroup-root="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651200 4931 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651210 4931 flags.go:64] FLAG: --client-ca-file="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651220 4931 flags.go:64] FLAG: --cloud-config="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651229 4931 flags.go:64] FLAG: --cloud-provider="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651239 4931 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651251 4931 flags.go:64] FLAG: --cluster-domain="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651260 4931 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651270 4931 flags.go:64] FLAG: --config-dir="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651279 4931 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651289 4931 
flags.go:64] FLAG: --container-log-max-files="5" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651301 4931 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651310 4931 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651320 4931 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651329 4931 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651339 4931 flags.go:64] FLAG: --contention-profiling="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651348 4931 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651358 4931 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651367 4931 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651377 4931 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651388 4931 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651398 4931 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651407 4931 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651417 4931 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651426 4931 flags.go:64] FLAG: --enable-server="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651435 4931 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651447 4931 flags.go:64] FLAG: --event-burst="100" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651457 4931 flags.go:64] FLAG: --event-qps="50" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651466 4931 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651476 4931 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651485 4931 flags.go:64] FLAG: --eviction-hard="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651497 4931 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651506 4931 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651516 4931 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651529 4931 flags.go:64] FLAG: --eviction-soft="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651538 4931 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651548 4931 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651557 4931 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651567 4931 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651576 4931 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 04:24:01 crc 
kubenswrapper[4931]: I0131 04:24:01.651585 4931 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651594 4931 flags.go:64] FLAG: --feature-gates="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651617 4931 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651627 4931 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651636 4931 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651658 4931 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651668 4931 flags.go:64] FLAG: --healthz-port="10248" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651677 4931 flags.go:64] FLAG: --help="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651687 4931 flags.go:64] FLAG: --hostname-override="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651698 4931 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651707 4931 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651717 4931 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651758 4931 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651770 4931 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651782 4931 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651793 4931 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651803 4931 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651813 4931 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651822 4931 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651845 4931 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651855 4931 flags.go:64] FLAG: --kube-reserved="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651864 4931 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651874 4931 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651883 4931 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651892 4931 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651901 4931 flags.go:64] FLAG: --lock-file="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651910 4931 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651919 4931 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651928 4931 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651942 4931 flags.go:64] FLAG: --log-json-split-stream="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651953 4931 
flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651963 4931 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651972 4931 flags.go:64] FLAG: --logging-format="text" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651981 4931 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.651991 4931 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652000 4931 flags.go:64] FLAG: --manifest-url="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652009 4931 flags.go:64] FLAG: --manifest-url-header="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652021 4931 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652033 4931 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652110 4931 flags.go:64] FLAG: --max-pods="110" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652124 4931 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652137 4931 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652149 4931 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652176 4931 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652189 4931 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652201 4931 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652214 4931 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652236 4931 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652258 4931 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652268 4931 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652288 4931 flags.go:64] FLAG: --pod-cidr="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652299 4931 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652311 4931 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652320 4931 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652331 4931 flags.go:64] FLAG: --pods-per-core="0" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652343 4931 flags.go:64] FLAG: --port="10250" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652355 4931 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652367 4931 flags.go:64] FLAG: --provider-id="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652379 4931 flags.go:64] FLAG: --qos-reserved="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652391 4931 
flags.go:64] FLAG: --read-only-port="10255" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652403 4931 flags.go:64] FLAG: --register-node="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652414 4931 flags.go:64] FLAG: --register-schedulable="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652424 4931 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652440 4931 flags.go:64] FLAG: --registry-burst="10" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652450 4931 flags.go:64] FLAG: --registry-qps="5" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652459 4931 flags.go:64] FLAG: --reserved-cpus="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652470 4931 flags.go:64] FLAG: --reserved-memory="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652482 4931 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652492 4931 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652502 4931 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652511 4931 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652520 4931 flags.go:64] FLAG: --runonce="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652529 4931 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652539 4931 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652549 4931 flags.go:64] FLAG: --seccomp-default="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652558 4931 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652567 4931 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652576 4931 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652585 4931 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652595 4931 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652604 4931 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652614 4931 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652623 4931 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652632 4931 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652641 4931 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652651 4931 flags.go:64] FLAG: --system-cgroups="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652660 4931 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652675 4931 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652684 4931 flags.go:64] FLAG: --tls-cert-file="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652693 
4931 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652706 4931 flags.go:64] FLAG: --tls-min-version="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652715 4931 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652756 4931 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652766 4931 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652775 4931 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652785 4931 flags.go:64] FLAG: --v="2" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652796 4931 flags.go:64] FLAG: --version="false" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652807 4931 flags.go:64] FLAG: --vmodule="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652818 4931 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.652828 4931 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653051 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653063 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653073 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653082 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653091 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653100 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653109 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653118 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653128 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653139 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653149 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653159 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653167 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653176 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653185 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653194 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653202 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653210 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653218 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653226 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653233 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653245 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653255 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653263 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653271 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653279 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653287 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653295 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653303 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653311 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653318 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653326 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653334 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653342 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653381 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653389 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653397 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653405 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653416 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653425 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653433 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653441 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653449 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653457 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653465 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653472 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653480 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653488 4931 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653496 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653504 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653513 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653521 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653530 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653538 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653546 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653554 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653562 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653569 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653578 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653586 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653594 4931 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653603 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653611 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653619 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653626 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653634 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653645 4931 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653654 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653665 4931 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653674 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.653683 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.653696 4931 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.668951 4931 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.669026 4931 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669158 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669185 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669195 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669207 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669216 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669224 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669232 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669240 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669248 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669257 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669264 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669272 4931 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669280 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669289 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669297 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669304 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669312 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 
04:24:01.669320 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669329 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669338 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669347 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669355 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669363 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669370 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669378 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669386 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669394 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669403 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669448 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669460 4931 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669469 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669477 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669486 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669494 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669505 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669513 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669521 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669528 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669536 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669544 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669552 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669559 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669570 4931 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669579 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669588 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669596 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669606 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669614 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669622 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669631 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669640 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669648 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669657 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669665 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669673 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669681 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669689 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669696 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669704 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669712 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669719 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669755 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669765 4931 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669775 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669784 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669793 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669800 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669808 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669816 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669824 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.669835 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.669850 4931 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670125 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670137 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670146 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670154 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670162 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670170 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670178 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670185 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670193 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670201 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670209 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670216 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670224 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670234 4931 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670242 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670250 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670257 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670265 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670273 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670282 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670290 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670297 4931 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670307 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670317 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670327 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670335 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670343 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670351 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670359 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670367 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670375 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670383 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670393 4931 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670402 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670411 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670420 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670428 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670435 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670444 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670451 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670460 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670468 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670476 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670484 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670492 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670501 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670509 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670516 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670525 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670533 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670541 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670549 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670557 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670564 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670572 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670582 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670590 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670600 4931 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670610 4931 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670621 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670631 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670641 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670649 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670658 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670667 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670675 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670683 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670691 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670700 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670708 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.670723 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.670757 4931 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.671054 4931 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.677098 4931 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.677250 4931 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
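The repeated "unrecognized feature gate" warnings and the several "feature gates: {map[...]}" summaries above show the gate map being parsed and logged more than once during startup: names the kubelet does not know (the OpenShift-specific ones) only produce a warning, while known gates are applied. The sketch below illustrates that apply-and-warn pattern with the standard library only; the applyGates helper, the default values, and the exact requested set are illustrative assumptions, not the kubelet's actual implementation.

package main

import "log"

// applyGates is an illustrative stand-in for the kubelet's feature-gate
// handling: known gates are applied, unknown names only produce a warning,
// mirroring the "unrecognized feature gate" lines in this log.
func applyGates(known map[string]bool, requested map[string]bool) map[string]bool {
	effective := make(map[string]bool, len(known))
	for name, def := range known {
		effective[name] = def // start from assumed defaults
	}
	for name, enabled := range requested {
		if _, ok := known[name]; !ok {
			log.Printf("W unrecognized feature gate: %s", name)
			continue
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	// Defaults below are assumptions for the sketch; the enabled values are
	// taken from the "feature gates: {map[...]}" summaries in this log.
	known := map[string]bool{
		"CloudDualStackNodeIPs":                  false,
		"DisableKubeletCloudCredentialProviders": false,
		"KMSv1":                                  false,
		"ValidatingAdmissionPolicy":              false,
	}
	requested := map[string]bool{
		"CloudDualStackNodeIPs":                  true,
		"DisableKubeletCloudCredentialProviders": true,
		"KMSv1":                                  true,
		"ValidatingAdmissionPolicy":              true,
		"GatewayAPI":                             true, // unknown to the kubelet here, warned and skipped
	}
	log.Printf("I feature gates: %v", applyGates(known, requested))
}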
Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.679688 4931 server.go:997] "Starting client certificate rotation" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.679757 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.680004 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 00:07:28.681043979 +0000 UTC Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.680117 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.704986 4931 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.708097 4931 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.710193 4931 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.723170 4931 log.go:25] "Validated CRI v1 runtime API" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.760715 4931 log.go:25] "Validated CRI v1 image API" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.763299 4931 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.768427 4931 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-04-20-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.768459 4931 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.789040 4931 manager.go:217] Machine: {Timestamp:2026-01-31 04:24:01.784547372 +0000 UTC m=+0.593776286 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e984073f-fa07-4ec7-ab9e-f3b72b6e8f33 BootID:d62aa0b2-fc7e-4980-9739-9ae59578d075 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:50:da:4d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:50:da:4d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fc:af:69 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:df:37:62 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1f:e5:c2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:98:1b:7e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5a:1d:f5:2f:4e:4a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:89:33:84:a3:8e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.789332 4931 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.789467 4931 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.791499 4931 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.791908 4931 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.791980 4931 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.792319 4931 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.792520 4931 
container_manager_linux.go:303] "Creating device plugin manager" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.793932 4931 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.794012 4931 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.794934 4931 state_mem.go:36] "Initialized new in-memory state store" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.795458 4931 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.799754 4931 kubelet.go:418] "Attempting to sync node with API server" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.799808 4931 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.799868 4931 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.799895 4931 kubelet.go:324] "Adding apiserver pod source" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.799914 4931 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.807209 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.807298 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.807229 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.807341 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.808765 4931 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.809652 4931 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
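The "Creating Container Manager object based on Node Config" entry above records MemoryCapacity 33654128640 bytes, SystemReserved memory of 350Mi, KubeReserved null, and a hard eviction threshold of memory.available 100Mi. Under the standard node-allocatable formula (capacity minus kube-reserved, minus system-reserved, minus the hard eviction threshold), those logged values combine as in the short sketch below; the helper is illustrative arithmetic, not kubelet code.

package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	// Values taken from the Node Config entry in this log.
	capacity := int64(33654128640)        // MemoryCapacity in bytes
	systemReserved := int64(350 * Mi)     // "SystemReserved":{"memory":"350Mi",...}
	kubeReserved := int64(0)              // "KubeReserved":null in the logged config
	evictionHardMemory := int64(100 * Mi) // memory.available hard threshold

	// Standard allocatable formula:
	//   allocatable = capacity - kubeReserved - systemReserved - evictionHard
	allocatable := capacity - kubeReserved - systemReserved - evictionHardMemory
	fmt.Printf("memory allocatable: %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(1024*1024*1024))
}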
Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.811940 4931 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813382 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813404 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813412 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813419 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813430 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813439 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813447 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813457 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813465 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813479 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813505 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.813513 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.815835 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.816392 4931 server.go:1280] "Started kubelet" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.818264 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:01 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.820710 4931 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.820655 4931 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.821691 4931 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.826080 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.826186 4931 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.826280 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:05:12.826255658 +0000 UTC Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.826594 4931 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.826543 4931 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.826905 4931 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.826503 4931 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.830415 4931 factory.go:55] Registering systemd factory Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.830454 4931 factory.go:221] Registration of the systemd container factory successfully Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.831818 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.831888 4931 server.go:460] "Adding debug handlers to kubelet server" Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.831909 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.831947 4931 factory.go:153] Registering CRI-O factory Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.831991 4931 factory.go:221] Registration of the crio container factory successfully Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.832283 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.832298 4931 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: 
connect: no such file or directory Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.832355 4931 factory.go:103] Registering Raw factory Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.832406 4931 manager.go:1196] Started watching for new ooms in manager Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.832659 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fb6236b1230e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 04:24:01.816359138 +0000 UTC m=+0.625588022,LastTimestamp:2026-01-31 04:24:01.816359138 +0000 UTC m=+0.625588022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.833772 4931 manager.go:319] Starting recovery of all containers Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.845871 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.845966 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846027 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846053 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846082 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846106 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846131 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846154 
4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846180 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846241 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846268 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846293 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846311 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846333 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846352 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846372 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846422 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846440 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846457 4931 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846480 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846552 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846572 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846595 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846613 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846631 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846648 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846668 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846689 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846705 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846757 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846786 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846808 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846830 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846853 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846881 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846906 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846929 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.846953 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847047 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847081 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847105 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847128 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847154 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847181 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847207 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847231 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847303 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847331 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847355 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847382 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847405 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847431 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847467 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847497 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847527 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847553 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847582 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847609 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847633 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847657 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847682 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847707 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847772 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847799 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847822 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847846 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847873 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847898 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847925 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847947 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.847975 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848000 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848023 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848048 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848071 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848093 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848116 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848141 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848165 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848187 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848212 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848236 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848259 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848286 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848310 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848334 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848363 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848388 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848412 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848438 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848462 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848485 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848510 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848533 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848559 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.848583 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849134 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849169 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849198 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849225 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849250 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849272 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849297 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849322 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849354 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849381 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849408 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849434 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849461 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849488 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849514 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849542 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849569 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849595 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849619 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849643 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849669 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849694 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849717 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849784 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849807 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849826 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849846 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849867 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849886 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849904 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849923 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849941 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.849961 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854183 4931 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854263 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854300 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854324 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854345 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854364 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854381 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854401 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854418 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854437 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854455 4931 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854473 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854491 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854524 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854542 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854562 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854580 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854597 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854615 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854633 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854651 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854673 4931 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854695 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854714 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854766 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854791 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854810 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854829 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854847 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854866 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854884 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854904 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854924 4931 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854975 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.854995 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855015 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855035 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855053 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855071 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855089 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855107 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855133 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855152 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855179 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855199 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855219 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855237 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855270 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855288 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855307 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855332 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855350 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855369 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855390 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855408 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855426 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855445 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855463 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855481 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855501 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855521 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855538 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855556 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855574 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855599 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855621 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855640 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855659 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855682 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855701 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855749 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855778 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855802 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855821 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855839 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855858 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855877 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855910 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855929 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855947 4931 reconstruct.go:97] "Volume reconstruction finished" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.855960 4931 reconciler.go:26] "Reconciler: start to sync state" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.871465 4931 manager.go:324] Recovery completed Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.887383 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.890027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.890066 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.890077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.891536 4931 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.891568 4931 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.891592 4931 state_mem.go:36] "Initialized new in-memory state store" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.893768 4931 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.895443 4931 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.895491 4931 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.895523 4931 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.895565 4931 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 04:24:01 crc kubenswrapper[4931]: W0131 04:24:01.898202 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.898281 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.912547 4931 policy_none.go:49] "None policy: Start" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.913289 4931 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.913312 4931 state_mem.go:35] "Initializing new in-memory state store" Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.927181 4931 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.974645 4931 manager.go:334] "Starting Device Plugin manager" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.975185 4931 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.975202 4931 server.go:79] "Starting device plugin registration server" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.975817 4931 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.975861 4931 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.976192 4931 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.976278 4931 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.976285 4931 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 04:24:01 crc kubenswrapper[4931]: E0131 04:24:01.982770 4931 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.996137 4931 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 04:24:01 crc kubenswrapper[4931]: 
I0131 04:24:01.996223 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.997656 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.997704 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.997720 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.997932 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.998843 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.998899 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999047 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999093 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999231 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999383 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999408 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999905 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999937 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:01 crc kubenswrapper[4931]: I0131 04:24:01.999950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000076 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000110 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000240 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000267 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000917 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000935 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.000943 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.001022 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.001543 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.001568 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.001912 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.001932 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.001941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002371 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002386 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002402 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002410 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002621 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.002659 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.003901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.003934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.003950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: E0131 04:24:02.033105 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061359 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061408 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061435 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061460 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061481 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061504 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061526 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061546 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061568 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061608 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061672 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061770 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061785 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.061849 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.076607 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.079859 4931 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.079887 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.079898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.079921 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:24:02 crc kubenswrapper[4931]: E0131 04:24:02.080320 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162555 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162782 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162807 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162825 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162842 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162858 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162872 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162854 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162885 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162907 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163128 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162945 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162964 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162980 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162977 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.162925 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163253 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163449 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163397 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163481 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163503 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163510 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163623 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163664 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163672 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163527 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163545 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163693 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163714 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.163816 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.280769 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.282509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.282558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.282581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.282619 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:24:02 crc kubenswrapper[4931]: E0131 04:24:02.283182 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.330446 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.336594 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.356811 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.371178 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.379792 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:02 crc kubenswrapper[4931]: W0131 04:24:02.386303 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-40147ee66d08fcbb57c56f349379cac3768cf44db1ac854153f1e07a86af08e9 WatchSource:0}: Error finding container 40147ee66d08fcbb57c56f349379cac3768cf44db1ac854153f1e07a86af08e9: Status 404 returned error can't find the container with id 40147ee66d08fcbb57c56f349379cac3768cf44db1ac854153f1e07a86af08e9 Jan 31 04:24:02 crc kubenswrapper[4931]: W0131 04:24:02.387767 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6e2f3c7b778f8f16161cf3fb528a5e67835b5c5117e083e7e6a91095b844f4b2 WatchSource:0}: Error finding container 6e2f3c7b778f8f16161cf3fb528a5e67835b5c5117e083e7e6a91095b844f4b2: Status 404 returned error can't find the container with id 6e2f3c7b778f8f16161cf3fb528a5e67835b5c5117e083e7e6a91095b844f4b2 Jan 31 04:24:02 crc kubenswrapper[4931]: W0131 04:24:02.403984 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a79421dfdfd24b76c6d79ec28b371fc6319895e87b19238643cae7370b8afd24 WatchSource:0}: Error finding container a79421dfdfd24b76c6d79ec28b371fc6319895e87b19238643cae7370b8afd24: Status 404 returned error can't find the container with id a79421dfdfd24b76c6d79ec28b371fc6319895e87b19238643cae7370b8afd24 Jan 31 04:24:02 crc kubenswrapper[4931]: W0131 04:24:02.405880 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3a44b63ae935edbba58ef5aabaab56d93f3eca65bb193720528d2d2726879c45 WatchSource:0}: Error finding container 3a44b63ae935edbba58ef5aabaab56d93f3eca65bb193720528d2d2726879c45: Status 404 returned error can't find the container with id 3a44b63ae935edbba58ef5aabaab56d93f3eca65bb193720528d2d2726879c45 Jan 31 04:24:02 crc kubenswrapper[4931]: W0131 04:24:02.412904 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2a0a701b5eb391a590fc22e2a230bcde4b9259e9e84f7032452d1e169e440b3d WatchSource:0}: Error finding container 2a0a701b5eb391a590fc22e2a230bcde4b9259e9e84f7032452d1e169e440b3d: Status 404 returned error can't find the container with id 2a0a701b5eb391a590fc22e2a230bcde4b9259e9e84f7032452d1e169e440b3d Jan 31 04:24:02 crc kubenswrapper[4931]: E0131 04:24:02.434808 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.684038 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.686517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.686579 4931 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.686597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.686644 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:24:02 crc kubenswrapper[4931]: E0131 04:24:02.687344 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.819076 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:02 crc kubenswrapper[4931]: W0131 04:24:02.824374 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:02 crc kubenswrapper[4931]: E0131 04:24:02.824493 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.826772 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:15:27.518602617 +0000 UTC Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.900860 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6e2f3c7b778f8f16161cf3fb528a5e67835b5c5117e083e7e6a91095b844f4b2"} Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.903297 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"40147ee66d08fcbb57c56f349379cac3768cf44db1ac854153f1e07a86af08e9"} Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.904675 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a0a701b5eb391a590fc22e2a230bcde4b9259e9e84f7032452d1e169e440b3d"} Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.906630 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3a44b63ae935edbba58ef5aabaab56d93f3eca65bb193720528d2d2726879c45"} Jan 31 04:24:02 crc kubenswrapper[4931]: I0131 04:24:02.907684 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a79421dfdfd24b76c6d79ec28b371fc6319895e87b19238643cae7370b8afd24"} Jan 31 04:24:03 crc kubenswrapper[4931]: W0131 
04:24:03.048251 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:03 crc kubenswrapper[4931]: E0131 04:24:03.048367 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:03 crc kubenswrapper[4931]: W0131 04:24:03.130837 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:03 crc kubenswrapper[4931]: E0131 04:24:03.130940 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:03 crc kubenswrapper[4931]: E0131 04:24:03.237266 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Jan 31 04:24:03 crc kubenswrapper[4931]: W0131 04:24:03.308243 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:03 crc kubenswrapper[4931]: E0131 04:24:03.308362 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.488189 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.489845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.489888 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.489918 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.489944 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:24:03 crc kubenswrapper[4931]: E0131 04:24:03.490437 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" 
node="crc" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.811267 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 04:24:03 crc kubenswrapper[4931]: E0131 04:24:03.812443 4931 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.819100 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.827119 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:47:56.082875865 +0000 UTC Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.912274 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca" exitCode=0 Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.912361 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.912447 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.913499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.913537 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.913547 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.915188 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.916160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.916184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.916197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.916812 4931 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7a0544d5c0e24ffca37b0da8653c6c188d545e025d20d1190ff7f43fc572f773" exitCode=0 Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.916878 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7a0544d5c0e24ffca37b0da8653c6c188d545e025d20d1190ff7f43fc572f773"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.917068 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.919092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.919125 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.919138 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.919551 4931 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a" exitCode=0 Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.919616 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.919745 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.920667 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.920708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.920746 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.922313 4931 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5" exitCode=0 Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.922398 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.922478 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.925574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.925604 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.925617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.929691 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.929758 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.929773 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.929797 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488"} Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.929903 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.931569 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.931595 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:03 crc kubenswrapper[4931]: I0131 04:24:03.931605 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.819451 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.827448 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:34:45.510850744 +0000 UTC Jan 31 04:24:04 crc kubenswrapper[4931]: E0131 04:24:04.839260 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="3.2s" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.936119 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa"} Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.936173 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed"} Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.936185 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f"} Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.938552 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"970aaff4cf619b86c9fc878350e984b2671d6ae9a5cd42f2a0e54d6b291183c8"} Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.938607 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.939612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.939643 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.939654 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.940880 4931 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112" exitCode=0 Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.940939 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112"} Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.941164 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.942404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.942428 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.942438 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.944966 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1"} Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.945014 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc"} Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.945022 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.945029 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded"} Jan 31 04:24:04 crc 
kubenswrapper[4931]: I0131 04:24:04.945043 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.946054 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.946086 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.946095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.946225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.946262 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:04 crc kubenswrapper[4931]: I0131 04:24:04.946276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.091380 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.093230 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.093267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.093281 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.093309 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:24:05 crc kubenswrapper[4931]: E0131 04:24:05.093865 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 31 04:24:05 crc kubenswrapper[4931]: W0131 04:24:05.596735 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 31 04:24:05 crc kubenswrapper[4931]: E0131 04:24:05.596828 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.828422 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:19:32.820420548 +0000 UTC Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.951143 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7"} Jan 
31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.951202 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8"} Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.951342 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.952568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.952608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.952625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.953233 4931 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40" exitCode=0 Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.953328 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.953325 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40"} Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.953371 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.953397 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.953542 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.954874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.954902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.954913 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.955064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.955122 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.955148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.955452 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.955479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 31 04:24:05 crc kubenswrapper[4931]: I0131 04:24:05.955490 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.828776 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:18:13.524534518 +0000 UTC Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.959319 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a"} Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.959369 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69"} Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.959384 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c"} Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.959405 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def"} Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.959383 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.959469 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.960547 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.960579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:06 crc kubenswrapper[4931]: I0131 04:24:06.960591 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.001536 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.001919 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.003738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.003771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.003785 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.675923 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 
04:24:07.829491 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:57:42.310106244 +0000 UTC Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.968761 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d"} Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.968782 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.968863 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.968981 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.970234 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.970267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.970324 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.970630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.970683 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:07 crc kubenswrapper[4931]: I0131 04:24:07.970706 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.010134 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.294988 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.297148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.297205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.297223 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.297262 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.308368 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.830502 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:04:15.861349354 +0000 UTC Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.971459 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.971852 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.973029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.973097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.973028 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.973124 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.973144 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:08 crc kubenswrapper[4931]: I0131 04:24:08.973167 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.358674 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.358891 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.360209 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.360237 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.360248 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.362638 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.764463 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.764826 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.767171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.767255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.767282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.774485 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.831085 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-01 19:38:11.886064699 +0000 UTC Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.974022 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.974089 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.975599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.975645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.975662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.975664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.975702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:09 crc kubenswrapper[4931]: I0131 04:24:09.975788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.001628 4931 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.001759 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.178442 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.178693 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.180350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.180398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.180415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.661936 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.669925 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:10 crc 
kubenswrapper[4931]: I0131 04:24:10.831331 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:44:41.178517756 +0000 UTC Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.977149 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.978506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.978676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:10 crc kubenswrapper[4931]: I0131 04:24:10.978698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.831475 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:58:30.898868213 +0000 UTC Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.981891 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.982034 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.982219 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:11 crc kubenswrapper[4931]: E0131 04:24:11.982922 4931 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.983689 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.983796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.983816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.983870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.983893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:11 crc kubenswrapper[4931]: I0131 04:24:11.983823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:12 crc kubenswrapper[4931]: I0131 04:24:12.832108 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:35:37.122248607 +0000 UTC Jan 31 04:24:13 crc kubenswrapper[4931]: I0131 04:24:13.833045 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:52:07.832908903 +0000 UTC Jan 31 04:24:14 crc kubenswrapper[4931]: I0131 04:24:14.833906 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-13 14:02:25.582589238 +0000 UTC Jan 31 04:24:15 crc kubenswrapper[4931]: I0131 04:24:15.820770 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 04:24:15 crc kubenswrapper[4931]: I0131 04:24:15.820841 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 04:24:15 crc kubenswrapper[4931]: I0131 04:24:15.830843 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 04:24:15 crc kubenswrapper[4931]: I0131 04:24:15.831044 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 04:24:15 crc kubenswrapper[4931]: I0131 04:24:15.834961 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:50:55.112613898 +0000 UTC Jan 31 04:24:16 crc kubenswrapper[4931]: I0131 04:24:16.835942 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 15:46:36.469973924 +0000 UTC Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.685449 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.685704 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.686645 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.686791 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.687844 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.687898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.687917 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.703336 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:17 crc kubenswrapper[4931]: I0131 04:24:17.837135 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:23:08.251871179 +0000 UTC Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.002073 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.002974 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.003056 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.003662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.003745 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.003771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.309637 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.309782 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 04:24:18 crc kubenswrapper[4931]: I0131 04:24:18.838553 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:03:15.34911147 +0000 UTC Jan 31 04:24:19 crc kubenswrapper[4931]: I0131 04:24:19.779042 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:19 crc kubenswrapper[4931]: I0131 04:24:19.779190 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:19 crc kubenswrapper[4931]: I0131 04:24:19.780279 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:24:19 crc kubenswrapper[4931]: I0131 04:24:19.780314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:19 crc kubenswrapper[4931]: I0131 04:24:19.780325 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:19 crc kubenswrapper[4931]: I0131 04:24:19.838989 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:54:08.022096608 +0000 UTC Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.003025 4931 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.003093 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.803021 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.807665 4931 trace.go:236] Trace[547826598]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:24:06.077) (total time: 14730ms): Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[547826598]: ---"Objects listed" error: 14730ms (04:24:20.807) Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[547826598]: [14.730159844s] [14.730159844s] END Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.807709 4931 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.808786 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.809286 4931 trace.go:236] Trace[150225448]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:24:05.810) (total time: 14998ms): Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[150225448]: ---"Objects listed" error: 14998ms (04:24:20.809) Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[150225448]: [14.99831138s] [14.99831138s] END Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.809363 4931 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.809277 4931 trace.go:236] Trace[1159498149]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:24:10.091) (total time: 10717ms): Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[1159498149]: ---"Objects listed" error: 10717ms 
(04:24:20.809) Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[1159498149]: [10.717212286s] [10.717212286s] END Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.809400 4931 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.809331 4931 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.810300 4931 trace.go:236] Trace[1855372490]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:24:06.497) (total time: 14312ms): Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[1855372490]: ---"Objects listed" error: 14312ms (04:24:20.810) Jan 31 04:24:20 crc kubenswrapper[4931]: Trace[1855372490]: [14.312696174s] [14.312696174s] END Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.810337 4931 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.812668 4931 apiserver.go:52] "Watching apiserver" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.817184 4931 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.817653 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.818170 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.818473 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.818475 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.818534 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.818608 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.818637 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.819163 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.819192 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.818839 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821343 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821988 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.822135 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821845 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821882 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821894 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821913 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821917 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.821958 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.827806 4931 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.831957 4931 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.839219 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:33:57.586809623 +0000 UTC Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.863952 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.867034 4931 csr.go:261] certificate signing request csr-p9z22 is approved, waiting to be issued Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.879033 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.883415 4931 csr.go:257] certificate signing request csr-p9z22 is issued Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.891374 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.908176 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.910935 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.910990 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911018 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911056 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911085 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911115 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911146 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911190 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911222 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911251 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911279 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911308 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911338 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911501 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.911514 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.912036 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.912444 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.912617 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.912625 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.912633 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.912663 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.912889 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.913070 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.913081 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.913304 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.913348 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916172 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916313 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916365 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916398 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916425 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916473 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916504 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916530 4931 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916560 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916593 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916621 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916644 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916674 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916690 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916707 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916833 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916913 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916948 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.916978 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917013 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917044 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917074 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917099 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917130 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917157 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917181 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917181 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917207 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917235 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917263 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917286 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917313 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917340 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 
04:24:20.917367 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917394 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917421 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917656 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917930 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.917906 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918037 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918065 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918110 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918140 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918285 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918356 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918574 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918618 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918682 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918647 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918747 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918830 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918841 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.918924 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919143 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919206 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919216 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919165 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919401 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919547 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919603 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919643 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919678 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919763 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919814 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919840 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919839 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919853 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919925 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.919990 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920038 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920067 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920088 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920282 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920335 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920346 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920460 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920387 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920649 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920742 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920793 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920735 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920833 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920881 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920888 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920912 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.920931 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.921233 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.921286 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.921332 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.921032 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.921309 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929079 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929202 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929153 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929171 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929269 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929154 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929332 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.921372 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929363 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929416 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929444 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929471 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929497 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929527 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929566 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929594 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929621 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929749 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929783 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929811 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929837 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929863 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929888 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929915 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929940 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929964 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.929992 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930063 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930107 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930135 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930162 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930187 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930234 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930250 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930259 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930287 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930306 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930326 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930348 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930399 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930425 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930451 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930473 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930497 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930521 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930522 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930575 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930603 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930628 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930655 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930680 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930705 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930746 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930757 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930770 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930772 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930868 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930872 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930925 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.930949 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931018 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931049 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931076 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931105 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931099 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931201 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931242 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931281 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931349 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931382 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931442 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931478 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931518 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931550 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931583 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931547 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931615 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931651 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931666 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931740 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931759 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931767 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931817 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931846 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931877 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931904 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931930 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931955 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.931979 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932013 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932039 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932059 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932053 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932076 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932121 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932137 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932157 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932175 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932192 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932209 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932228 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932246 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932262 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932278 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932294 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932314 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932343 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932396 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932417 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932438 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932463 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932484 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932503 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932522 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932543 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932562 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932584 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932605 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932627 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932645 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932664 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932701 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934458 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934512 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934543 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934589 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934612 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934638 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934660 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934689 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934781 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934812 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.934838 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.935170 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.936125 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.942290 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932284 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932318 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.932573 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.933114 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.933196 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.936823 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.936875 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.936967 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.937065 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.948705 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.937536 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.938083 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.940790 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:24:21.440765898 +0000 UTC m=+20.249994772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.940956 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.941058 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.941174 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.941329 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.941533 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.941773 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.942007 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.942024 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.942280 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.942362 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.942940 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.943738 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.943763 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.943843 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.944090 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.944114 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.944199 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.944718 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.945143 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.945181 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.945378 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.945656 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.945794 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.946005 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949123 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.946091 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.946149 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.947159 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.947360 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.947435 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.947471 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949256 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.947695 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.947866 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.947893 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.948263 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.948486 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.948501 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.948799 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949091 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949105 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.946147 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949398 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949356 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949471 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949440 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949655 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949664 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949696 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949874 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.949807 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950147 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950171 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950189 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950215 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950252 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950263 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950302 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950484 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950555 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950714 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950271 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950866 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950903 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950932 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950955 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.950986 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951005 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951027 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951048 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951073 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951160 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951217 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951237 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951257 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951311 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951510 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951600 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951627 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951653 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951789 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.951839 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.952031 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.952054 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.952270 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.952533 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.952591 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.952618 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.952907 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:21.452884744 +0000 UTC m=+20.262113628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.952957 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.953258 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.953466 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.953553 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.953888 4931 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.954039 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.954097 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.955243 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.956035 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.956212 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.956337 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.956662 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.956783 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.956847 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.956931 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:21.456911837 +0000 UTC m=+20.266140711 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.956973 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.956993 4931 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957007 4931 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957018 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957032 4931 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957044 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957056 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957056 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957068 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957080 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957092 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957104 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957127 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957138 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957150 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957161 4931 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957172 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957182 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957193 4931 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957203 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957213 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957225 4931 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957235 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957244 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957254 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957264 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957273 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957284 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957296 4931 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957307 4931 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957318 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957323 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957328 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957377 4931 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957384 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957390 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957419 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957430 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957443 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957454 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957464 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957475 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957485 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957495 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc 
kubenswrapper[4931]: I0131 04:24:20.957504 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957513 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957523 4931 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957536 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957546 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957562 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957571 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957580 4931 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957590 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957599 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957608 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957617 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957627 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc 
kubenswrapper[4931]: I0131 04:24:20.957635 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957645 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957653 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957662 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957671 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957681 4931 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957690 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957700 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957709 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957733 4931 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957742 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957752 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957762 4931 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 
31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957771 4931 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957781 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957790 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957799 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957809 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957818 4931 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957827 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957836 4931 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957846 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957855 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957864 4931 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957874 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957883 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 
04:24:20.957892 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957903 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957915 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957925 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957934 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957943 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957953 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957964 4931 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957973 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957982 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.957990 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958000 4931 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958009 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958018 4931 reconciler_common.go:293] "Volume detached for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958027 4931 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958036 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958046 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958055 4931 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958064 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958076 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958084 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958094 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958102 4931 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958112 4931 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958120 4931 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958129 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958139 4931 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958147 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958157 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958166 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958176 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958184 4931 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958193 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958202 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958211 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958220 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958229 4931 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958240 4931 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958249 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958257 4931 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958266 4931 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958275 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958283 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958292 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958300 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958310 4931 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958318 4931 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958327 4931 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958336 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958345 4931 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958353 4931 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958366 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958375 4931 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958384 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958393 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958400 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958409 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958418 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958426 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958435 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958445 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958459 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958467 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958475 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.958484 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.968056 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.968102 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.970357 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.970386 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.970399 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.970463 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:21.470444898 +0000 UTC m=+20.279673772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.973913 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.974973 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.975061 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.975406 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.975634 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.975749 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.975843 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.976172 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.976544 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.977734 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.978274 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.978618 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.978898 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.979104 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.979139 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.979167 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.979203 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.979258 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.979529 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.979998 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.980029 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.980046 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:20 crc kubenswrapper[4931]: E0131 04:24:20.980119 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:21.480095759 +0000 UTC m=+20.289324643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.981212 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.980708 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.981505 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.981585 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.982996 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983241 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983347 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983376 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983430 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983525 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983650 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983762 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983785 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.983986 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.984574 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.984933 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.985102 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.985820 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.986557 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.987378 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.987784 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.994115 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:20 crc kubenswrapper[4931]: I0131 04:24:20.996065 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.013934 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-79gv8"] Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.014242 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.016548 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.016537 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.018021 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.018140 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.027005 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.032772 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.053504 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.058985 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059021 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dc2\" (UniqueName: \"kubernetes.io/projected/5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39-kube-api-access-k5dc2\") pod \"node-resolver-79gv8\" (UID: \"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\") " pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059038 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39-hosts-file\") pod \"node-resolver-79gv8\" (UID: \"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\") " pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059054 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059115 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059127 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059137 4931 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059146 4931 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059154 4931 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059162 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059171 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059179 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059187 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059195 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059204 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059212 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059220 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059227 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059235 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059243 4931 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059252 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059260 4931 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059268 4931 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059276 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059283 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059291 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059299 4931 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059306 4931 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059314 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059327 4931 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059335 4931 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059344 4931 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059154 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059354 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059402 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059414 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059425 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059434 4931 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059445 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059455 4931 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059464 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059473 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059489 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059499 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059508 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 
04:24:21.059518 4931 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059528 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059537 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059546 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059556 4931 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059567 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.059341 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.075130 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.112317 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.126394 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.144588 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.146810 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.159862 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5dc2\" (UniqueName: \"kubernetes.io/projected/5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39-kube-api-access-k5dc2\") pod \"node-resolver-79gv8\" (UID: \"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\") " pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.159893 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39-hosts-file\") pod \"node-resolver-79gv8\" (UID: \"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\") " pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.159948 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39-hosts-file\") pod \"node-resolver-79gv8\" (UID: \"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\") " pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.159794 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.167042 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.167894 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-349857bef4c5e28697009a5860a4b78fc9bc17ca343c92c5d76d2a29207a14ab WatchSource:0}: Error finding container 349857bef4c5e28697009a5860a4b78fc9bc17ca343c92c5d76d2a29207a14ab: Status 404 returned error can't find the container with id 349857bef4c5e28697009a5860a4b78fc9bc17ca343c92c5d76d2a29207a14ab Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.175206 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.178027 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5dc2\" (UniqueName: \"kubernetes.io/projected/5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39-kube-api-access-k5dc2\") pod \"node-resolver-79gv8\" (UID: \"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\") " pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.178407 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4d72397b491204aa20c7b28de903eaf3ae0fa058c59f31ab0ebb0c1b102c1bfe WatchSource:0}: Error finding container 4d72397b491204aa20c7b28de903eaf3ae0fa058c59f31ab0ebb0c1b102c1bfe: Status 404 returned error can't find the container with id 4d72397b491204aa20c7b28de903eaf3ae0fa058c59f31ab0ebb0c1b102c1bfe Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.186202 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ebef0e9ca1c21560767da167162096004dc7c3f4f412541fa12b3928059b8121 WatchSource:0}: Error finding container ebef0e9ca1c21560767da167162096004dc7c3f4f412541fa12b3928059b8121: Status 404 returned error can't find the container with id ebef0e9ca1c21560767da167162096004dc7c3f4f412541fa12b3928059b8121 Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.333982 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-79gv8" Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.360815 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e593e10_e9d9_4d9c_82e3_4d51ce5e2f39.slice/crio-14142028e98d3cffff65272a902ef66e70d3970e8f5c3ff3be6c0968006fc5ad WatchSource:0}: Error finding container 14142028e98d3cffff65272a902ef66e70d3970e8f5c3ff3be6c0968006fc5ad: Status 404 returned error can't find the container with id 14142028e98d3cffff65272a902ef66e70d3970e8f5c3ff3be6c0968006fc5ad Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.463131 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.463201 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.463227 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.463322 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.463340 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:24:22.463307197 +0000 UTC m=+21.272536121 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.463377 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:22.463364597 +0000 UTC m=+21.272593471 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.463439 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.463474 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:22.463468188 +0000 UTC m=+21.272697062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.563702 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.563781 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564006 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564039 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564049 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564113 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:22.564097192 +0000 UTC m=+21.373326066 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564185 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564258 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564285 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:21 crc kubenswrapper[4931]: E0131 04:24:21.564330 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:22.564317094 +0000 UTC m=+21.373546018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.679947 4931 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680199 4931 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680232 4931 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680250 4931 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680275 4931 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 31 
04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680288 4931 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680294 4931 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680318 4931 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680318 4931 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680252 4931 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680337 4931 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680276 4931 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680228 4931 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680227 4931 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680262 4931 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680199 4931 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and 
no items received Jan 31 04:24:21 crc kubenswrapper[4931]: W0131 04:24:21.680299 4931 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.839592 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:53:53.125846811 +0000 UTC Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.924752 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 04:19:20 +0000 UTC, rotation deadline is 2026-10-25 18:16:53.694049723 +0000 UTC Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.924883 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6421h52m31.769171122s for next certificate rotation Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.930385 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.931111 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.932864 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.933555 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.934786 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.935414 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.936244 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.937470 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.938289 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.939593 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.940226 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.942661 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.943262 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.943794 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.945129 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.946021 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.947308 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.949067 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.950089 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.950810 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.951036 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.951951 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.952581 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.953064 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.954119 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.954509 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.955614 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.956245 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.957129 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 04:24:21 crc 
kubenswrapper[4931]: I0131 04:24:21.957689 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.958641 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.959130 4931 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.959228 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.963604 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.964500 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.965127 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.965441 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.967526 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.970225 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.971041 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.972908 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.973959 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.974911 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.975255 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.976418 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.978386 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.980231 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.980955 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.982424 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.983541 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.985550 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.986499 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.987500 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.988859 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.989717 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.991382 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.991454 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 04:24:21 crc kubenswrapper[4931]: I0131 04:24:21.992138 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.006496 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.014286 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ebef0e9ca1c21560767da167162096004dc7c3f4f412541fa12b3928059b8121"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.014372 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.015822 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.016478 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.018079 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7" exitCode=255 Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.018102 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.019622 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-79gv8" event={"ID":"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39","Type":"ContainerStarted","Data":"c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.019647 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-79gv8" event={"ID":"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39","Type":"ContainerStarted","Data":"14142028e98d3cffff65272a902ef66e70d3970e8f5c3ff3be6c0968006fc5ad"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.021800 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.021895 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.021927 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4d72397b491204aa20c7b28de903eaf3ae0fa058c59f31ab0ebb0c1b102c1bfe"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.023761 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.025148 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.025284 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"349857bef4c5e28697009a5860a4b78fc9bc17ca343c92c5d76d2a29207a14ab"} Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.030314 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.038189 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.051892 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.063740 4931 scope.go:117] "RemoveContainer" containerID="e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.068455 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.069986 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.084439 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.096130 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.105006 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.116543 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.127223 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.139173 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.140895 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.153275 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.162526 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.169608 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.178604 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.190800 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31
T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.202118 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.472126 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.472204 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.472228 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.472276 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:24:24.472261367 +0000 UTC m=+23.281490241 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.472320 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.472386 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.472413 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:24.472391941 +0000 UTC m=+23.281620905 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.472434 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:24.472419881 +0000 UTC m=+23.281648855 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.500626 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.573157 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.573263 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573381 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573406 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573423 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573438 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573444 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573451 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573511 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:24.573493719 +0000 UTC m=+23.382722603 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.573530 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:24.57352298 +0000 UTC m=+23.382751864 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.721775 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-r5kkh"] Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.722333 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-pcg8z"] Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.722919 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.729821 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.730081 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.730102 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.730263 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.730372 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.731574 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.731822 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.732678 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.733919 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.734676 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.735158 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.751334 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 
2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.753558 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.775830 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c7d60e8b-e113-470f-93ff-a8a795074642-rootfs\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.775868 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7d60e8b-e113-470f-93ff-a8a795074642-proxy-tls\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.775883 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-cni-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.775909 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-k8s-cni-cncf-io\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.775930 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvrq\" (UniqueName: \"kubernetes.io/projected/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-kube-api-access-kzvrq\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.775950 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-cni-bin\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.775966 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-hostroot\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776046 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-conf-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776116 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-cni-binary-copy\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776138 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-multus-certs\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-system-cni-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776285 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-socket-dir-parent\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776306 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-netns\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776322 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-daemon-config\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776345 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nqpj\" (UniqueName: \"kubernetes.io/projected/c7d60e8b-e113-470f-93ff-a8a795074642-kube-api-access-6nqpj\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776358 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-cni-multus\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776372 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-kubelet\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776387 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-etc-kubernetes\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776402 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7d60e8b-e113-470f-93ff-a8a795074642-mcd-auth-proxy-config\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776420 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-cnibin\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.776433 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-os-release\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.787327 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0
ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.809493 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.833709 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.840103 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:50:26.058752491 +0000 UTC Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.841361 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.841692 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.859708 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-cni-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877063 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c7d60e8b-e113-470f-93ff-a8a795074642-rootfs\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877088 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7d60e8b-e113-470f-93ff-a8a795074642-proxy-tls\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " 
pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877109 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-k8s-cni-cncf-io\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877135 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvrq\" (UniqueName: \"kubernetes.io/projected/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-kube-api-access-kzvrq\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877155 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-cni-bin\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877176 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-hostroot\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877194 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-conf-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877214 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-multus-certs\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877263 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-cni-binary-copy\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877296 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-system-cni-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877315 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-socket-dir-parent\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877335 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-netns\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877354 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-daemon-config\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877389 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nqpj\" (UniqueName: \"kubernetes.io/projected/c7d60e8b-e113-470f-93ff-a8a795074642-kube-api-access-6nqpj\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877410 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-cni-multus\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877432 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7d60e8b-e113-470f-93ff-a8a795074642-mcd-auth-proxy-config\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877452 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-cnibin\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877471 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-os-release\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877496 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-kubelet\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877517 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-etc-kubernetes\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877586 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-etc-kubernetes\") pod \"multus-r5kkh\" (UID: 
\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877690 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-cni-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.877747 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c7d60e8b-e113-470f-93ff-a8a795074642-rootfs\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.878132 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-k8s-cni-cncf-io\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.878668 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-cni-bin\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.878709 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-hostroot\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.878762 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-conf-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.878808 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-multus-certs\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879308 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-socket-dir-parent\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879368 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-system-cni-dir\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879380 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-run-netns\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879427 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-cni-multus\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879466 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-cnibin\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879499 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-host-var-lib-kubelet\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879821 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-os-release\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879861 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-cni-binary-copy\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.879966 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7d60e8b-e113-470f-93ff-a8a795074642-mcd-auth-proxy-config\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.880061 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-multus-daemon-config\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.883033 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7d60e8b-e113-470f-93ff-a8a795074642-proxy-tls\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.886598 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.896602 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.896710 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.896768 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.896808 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.896849 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:22 crc kubenswrapper[4931]: E0131 04:24:22.896924 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.901065 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nqpj\" (UniqueName: \"kubernetes.io/projected/c7d60e8b-e113-470f-93ff-a8a795074642-kube-api-access-6nqpj\") pod \"machine-config-daemon-pcg8z\" (UID: \"c7d60e8b-e113-470f-93ff-a8a795074642\") " pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.903258 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvrq\" (UniqueName: \"kubernetes.io/projected/0be95b57-6df4-4ba6-88e8-acf405e3d6d2-kube-api-access-kzvrq\") pod \"multus-r5kkh\" (UID: \"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\") " pod="openshift-multus/multus-r5kkh" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.905096 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.918551 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.930790 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.932756 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.939316 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.945431 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.958435 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.972021 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.988810 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.990335 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 04:24:22 crc kubenswrapper[4931]: I0131 04:24:22.999686 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.011788 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.015664 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31
T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.030063 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.032222 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2"} Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.033321 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: E0131 04:24:23.042399 4931 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.045679 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r5kkh" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.052515 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.052586 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: W0131 04:24:23.057173 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be95b57_6df4_4ba6_88e8_acf405e3d6d2.slice/crio-29743d643696bf08a68c5824014217ae0eff82c28ad2318a2b99a16d6f29b126 WatchSource:0}: Error finding container 29743d643696bf08a68c5824014217ae0eff82c28ad2318a2b99a16d6f29b126: Status 404 returned error can't find the container with id 29743d643696bf08a68c5824014217ae0eff82c28ad2318a2b99a16d6f29b126 Jan 31 04:24:23 crc kubenswrapper[4931]: W0131 04:24:23.067414 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7d60e8b_e113_470f_93ff_a8a795074642.slice/crio-6605506cce11c409417f310f461eff2c6432e2caeb317e1f6eb8c71447e07ce3 WatchSource:0}: Error finding container 
6605506cce11c409417f310f461eff2c6432e2caeb317e1f6eb8c71447e07ce3: Status 404 returned error can't find the container with id 6605506cce11c409417f310f461eff2c6432e2caeb317e1f6eb8c71447e07ce3 Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.068068 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.074680 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.084516 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.098276 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.109904 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.131942 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.141842 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-52fq9"] Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.142489 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78mxr"] Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.142654 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.143448 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.145009 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.145266 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 04:24:23 crc kubenswrapper[4931]: W0131 04:24:23.145459 4931 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 31 04:24:23 crc kubenswrapper[4931]: E0131 04:24:23.145494 4931 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 04:24:23 crc kubenswrapper[4931]: W0131 04:24:23.146770 4931 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.146824 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 04:24:23 crc kubenswrapper[4931]: E0131 04:24:23.146833 4931 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.146788 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.147164 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.147565 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.150151 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.160188 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.168910 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.169151 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180104 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-netd\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180147 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f2e5660-13d8-4896-bad5-008e165ba847-ovn-node-metrics-cert\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180171 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-system-cni-dir\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180188 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180203 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180224 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-cnibin\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180242 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-node-log\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-config\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180282 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-env-overrides\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180319 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-etc-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: 
\"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180352 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-systemd\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180372 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-ovn\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180339 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"na
me\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180394 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-kubelet\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180413 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-systemd-units\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180426 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-var-lib-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180472 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-bin\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180491 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bpr\" (UniqueName: \"kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180506 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-slash\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180522 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 
04:24:23.180543 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-script-lib\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180682 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-log-socket\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180749 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-netns\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180790 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-ovn-kubernetes\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180848 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-os-release\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180880 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3448d78c-9a3a-4729-b656-3f3dad829af2-cni-binary-copy\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180899 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3448d78c-9a3a-4729-b656-3f3dad829af2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.180925 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9hr\" (UniqueName: \"kubernetes.io/projected/3448d78c-9a3a-4729-b656-3f3dad829af2-kube-api-access-sz9hr\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.199857 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.211820 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.214746 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.232404 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.255483 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e29
0051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.271373 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281674 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-netns\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281713 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-os-release\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281743 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3448d78c-9a3a-4729-b656-3f3dad829af2-cni-binary-copy\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281759 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3448d78c-9a3a-4729-b656-3f3dad829af2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281774 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9hr\" (UniqueName: \"kubernetes.io/projected/3448d78c-9a3a-4729-b656-3f3dad829af2-kube-api-access-sz9hr\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281791 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-ovn-kubernetes\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281809 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-system-cni-dir\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281824 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " 
pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-netd\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281852 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f2e5660-13d8-4896-bad5-008e165ba847-ovn-node-metrics-cert\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281868 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281865 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-netns\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281933 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-cnibin\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281883 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-cnibin\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.281978 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-node-log\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282013 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-config\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282031 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-env-overrides\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc 
kubenswrapper[4931]: I0131 04:24:23.282066 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-etc-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282099 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-systemd\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282122 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-ovn\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282138 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-kubelet\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282153 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-systemd-units\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282172 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-var-lib-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282201 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-bin\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282229 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bpr\" (UniqueName: \"kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282244 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-slash\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282300 4931 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282318 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-script-lib\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282351 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-log-socket\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282420 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-log-socket\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282457 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-node-log\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282768 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282867 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3448d78c-9a3a-4729-b656-3f3dad829af2-cni-binary-copy\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282918 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-ovn-kubernetes\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282946 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-var-lib-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.282971 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-etc-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283009 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-systemd\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283035 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-ovn\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283057 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-kubelet\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283084 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-systemd-units\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283115 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-slash\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283142 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-bin\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283146 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-system-cni-dir\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283178 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-env-overrides\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283228 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78mxr\" (UID: 
\"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283285 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3448d78c-9a3a-4729-b656-3f3dad829af2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283376 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-openvswitch\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283387 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-netd\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.283439 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3448d78c-9a3a-4729-b656-3f3dad829af2-os-release\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.284132 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-script-lib\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.286232 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.290197 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f2e5660-13d8-4896-bad5-008e165ba847-ovn-node-metrics-cert\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.298384 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9hr\" (UniqueName: \"kubernetes.io/projected/3448d78c-9a3a-4729-b656-3f3dad829af2-kube-api-access-sz9hr\") pod \"multus-additional-cni-plugins-52fq9\" (UID: \"3448d78c-9a3a-4729-b656-3f3dad829af2\") " pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.307123 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.319636 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.333602 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.348538 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.381553 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.399009 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"
/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.411508 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.422753 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.442879 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.455086 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-52fq9" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.455427 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: W0131 04:24:23.470454 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3448d78c_9a3a_4729_b656_3f3dad829af2.slice/crio-b275b8fc75c21b4dd260600733043a4395ff0bd78c0694dbfbe9fab65e731150 WatchSource:0}: Error finding container 
b275b8fc75c21b4dd260600733043a4395ff0bd78c0694dbfbe9fab65e731150: Status 404 returned error can't find the container with id b275b8fc75c21b4dd260600733043a4395ff0bd78c0694dbfbe9fab65e731150 Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.485307 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.498333 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.516858 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.537030 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.556410 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.575955 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:23 crc kubenswrapper[4931]: I0131 04:24:23.840528 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:56:58.123142342 +0000 UTC Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.036609 4931 generic.go:334] "Generic (PLEG): container finished" podID="3448d78c-9a3a-4729-b656-3f3dad829af2" containerID="11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603" exitCode=0 Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.036690 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerDied","Data":"11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603"} Jan 31 04:24:24 crc 
kubenswrapper[4931]: I0131 04:24:24.036740 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerStarted","Data":"b275b8fc75c21b4dd260600733043a4395ff0bd78c0694dbfbe9fab65e731150"} Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.038178 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerStarted","Data":"f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1"} Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.038222 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerStarted","Data":"29743d643696bf08a68c5824014217ae0eff82c28ad2318a2b99a16d6f29b126"} Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.039840 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73"} Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.043964 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb"} Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.044105 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9"} Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.044204 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"6605506cce11c409417f310f461eff2c6432e2caeb317e1f6eb8c71447e07ce3"} Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.044576 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.048576 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.064565 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.075990 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.090513 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.103458 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.117772 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.134539 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"
},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-c
opy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.154542 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.167215 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.177890 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.189258 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.189334 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.193492 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-config\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.203349 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.220982 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e29
0051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.236389 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.249525 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 
31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.262430 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.276291 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.295920 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.298395 4931 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.298426 4931 projected.go:194] Error preparing data for projected volume kube-api-access-q8bpr for pod openshift-ovn-kubernetes/ovnkube-node-78mxr: failed to sync configmap cache: timed out waiting for the condition Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.298498 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr podName:5f2e5660-13d8-4896-bad5-008e165ba847 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:24.798480372 +0000 UTC m=+23.607709246 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q8bpr" (UniqueName: "kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr") pod "ovnkube-node-78mxr" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847") : failed to sync configmap cache: timed out waiting for the condition Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.321175 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.338781 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e29
0051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.353581 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.367654 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.373535 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.385291 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.398961 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.415398 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.459415 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.496156 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.496288 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:24 crc 
kubenswrapper[4931]: I0131 04:24:24.496320 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.496345 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:24:28.496317265 +0000 UTC m=+27.305546139 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.496423 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.496460 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.496492 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:28.496473469 +0000 UTC m=+27.305702423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.496544 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:28.496523341 +0000 UTC m=+27.305752215 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.597867 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.598198 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.598260 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.598283 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.598398 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:28.598352038 +0000 UTC m=+27.407580922 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.598240 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.598634 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.598995 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.599127 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.599339 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:28.599310793 +0000 UTC m=+27.408539707 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.801679 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bpr\" (UniqueName: \"kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.812787 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bpr\" (UniqueName: \"kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr\") pod \"ovnkube-node-78mxr\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.841595 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:18:17.277058211 +0000 UTC Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.896110 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.896265 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.896356 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.896468 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.896657 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:24 crc kubenswrapper[4931]: E0131 04:24:24.896788 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:24 crc kubenswrapper[4931]: I0131 04:24:24.960858 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:24 crc kubenswrapper[4931]: W0131 04:24:24.980018 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f2e5660_13d8_4896_bad5_008e165ba847.slice/crio-3fc2c69ac5190965e31f4add0aa0f4113fd124e32bb9986f4eb580cd4aa176e8 WatchSource:0}: Error finding container 3fc2c69ac5190965e31f4add0aa0f4113fd124e32bb9986f4eb580cd4aa176e8: Status 404 returned error can't find the container with id 3fc2c69ac5190965e31f4add0aa0f4113fd124e32bb9986f4eb580cd4aa176e8 Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.006988 4931 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.051125 4931 generic.go:334] "Generic (PLEG): container finished" podID="3448d78c-9a3a-4729-b656-3f3dad829af2" containerID="b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c" exitCode=0 Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.051229 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerDied","Data":"b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c"} Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.054042 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"3fc2c69ac5190965e31f4add0aa0f4113fd124e32bb9986f4eb580cd4aa176e8"} Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.080261 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.099019 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.123429 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.134513 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.152699 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.169681 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.188370 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: 
I0131 04:24:25.209784 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.224860 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.238604 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.252500 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.277195 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.297812 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.457348 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8p6fj"] Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.457926 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.459564 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.460063 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.460675 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.461302 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.474984 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.486485 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.497562 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.511422 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.513065 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2z8r\" (UniqueName: \"kubernetes.io/projected/fe320e12-71d8-45f5-8634-ee326cbdb4f5-kube-api-access-x2z8r\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.513178 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe320e12-71d8-45f5-8634-ee326cbdb4f5-serviceca\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.513321 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe320e12-71d8-45f5-8634-ee326cbdb4f5-host\") pod \"node-ca-8p6fj\" (UID: 
\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.526762 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.546987 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.576575 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.604653 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e29
0051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.614212 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe320e12-71d8-45f5-8634-ee326cbdb4f5-serviceca\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.614252 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe320e12-71d8-45f5-8634-ee326cbdb4f5-host\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.614311 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2z8r\" (UniqueName: \"kubernetes.io/projected/fe320e12-71d8-45f5-8634-ee326cbdb4f5-kube-api-access-x2z8r\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.614420 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe320e12-71d8-45f5-8634-ee326cbdb4f5-host\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.615550 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe320e12-71d8-45f5-8634-ee326cbdb4f5-serviceca\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.620748 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.633557 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2z8r\" (UniqueName: \"kubernetes.io/projected/fe320e12-71d8-45f5-8634-ee326cbdb4f5-kube-api-access-x2z8r\") pod \"node-ca-8p6fj\" (UID: \"fe320e12-71d8-45f5-8634-ee326cbdb4f5\") " pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.635269 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.649264 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.665998 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.677443 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.688240 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.776651 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8p6fj" Jan 31 04:24:25 crc kubenswrapper[4931]: W0131 04:24:25.787949 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe320e12_71d8_45f5_8634_ee326cbdb4f5.slice/crio-1dd4c612e4b5259257f07cf98fd453c0f96ada54dc8b06be32b5c40962b57412 WatchSource:0}: Error finding container 1dd4c612e4b5259257f07cf98fd453c0f96ada54dc8b06be32b5c40962b57412: Status 404 returned error can't find the container with id 1dd4c612e4b5259257f07cf98fd453c0f96ada54dc8b06be32b5c40962b57412 Jan 31 04:24:25 crc kubenswrapper[4931]: I0131 04:24:25.841943 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:35:07.803022346 +0000 UTC Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.058043 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p6fj" event={"ID":"fe320e12-71d8-45f5-8634-ee326cbdb4f5","Type":"ContainerStarted","Data":"7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924"} Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.058107 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p6fj" event={"ID":"fe320e12-71d8-45f5-8634-ee326cbdb4f5","Type":"ContainerStarted","Data":"1dd4c612e4b5259257f07cf98fd453c0f96ada54dc8b06be32b5c40962b57412"} Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.062519 4931 generic.go:334] "Generic (PLEG): container finished" podID="3448d78c-9a3a-4729-b656-3f3dad829af2" containerID="160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29" exitCode=0 Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.062605 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerDied","Data":"160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29"} Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.066009 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8" exitCode=0 Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.066059 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.080248 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.093601 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.108523 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.121671 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.134847 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.149036 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.163463 4931 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.163471 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.175889 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.195926 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.220774 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.244177 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: 
I0131 04:24:26.259120 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.275528 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.294489 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e29
0051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.308300 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.320969 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.362828 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.400246 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.437590 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.480640 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.535519 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"}
,{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.571903 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.598144 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.639655 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.676550 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.719689 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.756203 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.795498 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:26Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.842904 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:43:14.813481098 +0000 UTC Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.896707 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.896707 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:26 crc kubenswrapper[4931]: E0131 04:24:26.896881 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:26 crc kubenswrapper[4931]: I0131 04:24:26.896704 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:26 crc kubenswrapper[4931]: E0131 04:24:26.897219 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:26 crc kubenswrapper[4931]: E0131 04:24:26.897078 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.010348 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.018266 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.022880 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.035791 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.064817 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.075427 4931 generic.go:334] "Generic (PLEG): container finished" podID="3448d78c-9a3a-4729-b656-3f3dad829af2" containerID="22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a" exitCode=0 Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.075492 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerDied","Data":"22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.082965 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.083043 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.083068 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" 
event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.083088 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.083107 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.083127 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.093060 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.114232 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.135989 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.152963 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.170948 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.191264 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.208937 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.209492 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.212319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.212383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.212406 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.212559 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.241358 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.250346 4931 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.250693 4931 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.252481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.252562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.252600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.252624 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.252637 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: E0131 04:24:27.273231 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.279542 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.279566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.279575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.279590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.279599 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: E0131 04:24:27.305637 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.307027 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.311606 4931 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.313361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.313403 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.313416 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.313434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.313451 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: E0131 04:24:27.330397 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.334795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.334837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.334848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.334867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.334880 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: E0131 04:24:27.351267 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.355946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.355978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.355990 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.356010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.356024 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.360061 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: E0131 04:24:27.373035 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: E0131 04:24:27.373261 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.375428 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.375550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.375648 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.375766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.375879 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.400060 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.444824 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z 
is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.478446 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.479502 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.479627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.479789 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.479886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.479918 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.516372 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.560331 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.582229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.582287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.582307 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.582330 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.582346 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.600653 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.637639 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.677611 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.684129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.684249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.684338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.684410 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.684471 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.721795 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.759713 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.787524 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.787581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.787599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.787624 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.787642 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.794284 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.838410 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.843634 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:11:53.782795616 +0000 UTC Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.878254 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.889858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.889901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.889917 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.889937 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.889953 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.920539 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9
ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.961996 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cf
e943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.992658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.992750 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.992769 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.992790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.992805 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:27Z","lastTransitionTime":"2026-01-31T04:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:27 crc kubenswrapper[4931]: I0131 04:24:27.996839 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert
-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:27Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.036324 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.089549 4931 generic.go:334] "Generic (PLEG): container finished" podID="3448d78c-9a3a-4729-b656-3f3dad829af2" containerID="3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43" exitCode=0 Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.089602 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerDied","Data":"3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.094411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.094458 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.094473 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.094494 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.094511 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.111445 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.127375 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.160435 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.198558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.198589 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.198599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.198614 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.198624 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.199618 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.237412 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.287333 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z 
is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.301947 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.301993 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.302014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.302059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.302087 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.335772 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.356561 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.395071 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.404790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.404820 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.404832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.404852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.404865 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.438565 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.477786 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.508042 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.508129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.508142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.508193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.508209 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.516514 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.546308 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.546490 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:24:36.546457155 +0000 UTC m=+35.355686039 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.546755 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.546816 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.547058 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.547127 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:36.547111122 +0000 UTC m=+35.356340006 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.547554 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.547628 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:36.547610795 +0000 UTC m=+35.356839889 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.556663 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.597265 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.611016 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.611077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: 
I0131 04:24:28.611101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.611123 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.611138 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.638555 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:28Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.648441 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.648490 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.648518 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 
04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.648596 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:36.648573269 +0000 UTC m=+35.457802173 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.648247 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.648924 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.649038 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.649061 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.649072 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.649128 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:36.649115003 +0000 UTC m=+35.458343887 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.716477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.716519 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.716534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.716549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.716560 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.819245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.819289 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.819299 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.819316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.819338 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.844051 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:11:27.180297555 +0000 UTC Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.895915 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.895915 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.896038 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.896296 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.896000 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:28 crc kubenswrapper[4931]: E0131 04:24:28.896429 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.922219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.922286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.922305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.922336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:28 crc kubenswrapper[4931]: I0131 04:24:28.922356 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:28Z","lastTransitionTime":"2026-01-31T04:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.024805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.024840 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.024849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.024862 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.024871 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.095978 4931 generic.go:334] "Generic (PLEG): container finished" podID="3448d78c-9a3a-4729-b656-3f3dad829af2" containerID="0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff" exitCode=0 Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.096023 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerDied","Data":"0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.102712 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.116527 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.127411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.127476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.127500 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.127530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.127555 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.135860 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.159562 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.175617 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.190742 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.211204 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.224076 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.230806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.230853 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.230866 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.230885 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.230901 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.245820 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.258206 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.271144 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5
df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.279693 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.291768 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.305898 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.328958 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.333342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.333374 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.333385 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.333398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.333408 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.348178 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.436572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.437053 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.437065 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.437081 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.437093 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.539883 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.539941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.539958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.539981 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.539998 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.642819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.642927 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.642946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.643005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.643059 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.745443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.745508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.745552 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.745583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.745605 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.844789 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:13:57.568886718 +0000 UTC Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.847611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.847664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.847673 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.847689 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.847702 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.950116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.950184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.950218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.950285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:29 crc kubenswrapper[4931]: I0131 04:24:29.950296 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:29Z","lastTransitionTime":"2026-01-31T04:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.053073 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.053107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.053118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.053130 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.053138 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.108956 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" event={"ID":"3448d78c-9a3a-4729-b656-3f3dad829af2","Type":"ContainerStarted","Data":"d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.125815 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.141772 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.155126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.155162 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.155173 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.155208 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.155221 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.162615 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.176320 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.195158 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.212033 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.226433 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.253316 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.261787 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.261825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.261840 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.261862 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.261879 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.294632 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.316343 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.333100 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.348202 4931 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.352409 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.364433 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.364480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.364492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.364509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.364522 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.370669 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.389318 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.400858 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.468426 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.468470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.468478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.468495 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.468505 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.571101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.571167 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.571179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.571197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.571209 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.674183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.674235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.674245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.674261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.674273 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.777322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.777357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.777369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.777384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.777393 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.845520 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:35:24.083815029 +0000 UTC Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.880333 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.880382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.880395 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.880413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.880428 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.895777 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:30 crc kubenswrapper[4931]: E0131 04:24:30.895910 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.895975 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.895798 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:30 crc kubenswrapper[4931]: E0131 04:24:30.896051 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:30 crc kubenswrapper[4931]: E0131 04:24:30.896399 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.983403 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.983446 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.983455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.983476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:30 crc kubenswrapper[4931]: I0131 04:24:30.983487 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:30Z","lastTransitionTime":"2026-01-31T04:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.086800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.086828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.086837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.086850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.086859 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.189034 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.189499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.189511 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.189533 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.189546 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.291212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.291246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.291257 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.291272 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.291282 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.393737 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.393775 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.393783 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.393797 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.393808 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.495893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.495956 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.495963 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.495977 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.495986 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.598048 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.598087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.598098 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.598113 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.598124 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.700479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.700523 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.700534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.700548 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.700557 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.803980 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.804045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.804062 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.804086 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.804103 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.846213 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:55:31.585076586 +0000 UTC Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.906168 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.906211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.906224 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.906239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.906251 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:31Z","lastTransitionTime":"2026-01-31T04:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.921602 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.935020 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.951084 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.964970 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:31 crc kubenswrapper[4931]: I0131 04:24:31.981701 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.001800 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.009260 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.009300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.009320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.009341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.009353 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.017541 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.029435 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.042384 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.055036 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.066229 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.077146 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.089232 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.107634 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.112357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.112383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.112393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.112409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.112419 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.121593 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.123087 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.123140 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.127164 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z 
is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.145334 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.151764 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.151985 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.159735 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.172648 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.188106 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.200624 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.212556 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.214660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.214694 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.214704 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.214737 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.214747 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.229138 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.240344 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.256434 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.271393 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.283170 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.293205 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.305538 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.317933 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.317984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.317998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.318019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.318033 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.319806 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:
24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.341773 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae98004
60f9abe67183befcc1c218ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.355777 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.367150 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.380439 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.399002 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.416276 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.420037 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.420077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.420092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.420111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.420124 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.428838 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.441888 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.455350 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.476479 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e29
0051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.489363 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.505227 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.519330 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.523398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.523438 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.523452 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.523470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.523481 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.530640 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.543534 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.557421 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.625357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.625401 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.625412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.625428 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.625443 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.728427 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.728489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.728504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.728522 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.728535 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.834053 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.834093 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.834102 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.834129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.834139 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.847140 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:17:12.342939899 +0000 UTC Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.896803 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.896856 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.896815 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:32 crc kubenswrapper[4931]: E0131 04:24:32.896957 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:32 crc kubenswrapper[4931]: E0131 04:24:32.897067 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:32 crc kubenswrapper[4931]: E0131 04:24:32.897157 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.938190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.938237 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.938247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.938264 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:32 crc kubenswrapper[4931]: I0131 04:24:32.938274 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:32Z","lastTransitionTime":"2026-01-31T04:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.041593 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.041678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.041708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.041776 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.041802 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.124922 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.144799 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.144863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.144881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.144907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.144932 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.248379 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.248434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.248453 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.248479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.248497 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.351684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.351755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.351767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.351785 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.351797 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.455146 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.455245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.455267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.455296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.455314 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.559252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.559329 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.559349 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.559384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.559405 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.662090 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.662145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.662155 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.662174 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.662185 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.766188 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.766272 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.766295 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.766328 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.766349 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.847961 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:36:22.575429634 +0000 UTC Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.870256 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.870364 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.870386 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.870417 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.870439 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.974168 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.974267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.974324 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.974363 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:33 crc kubenswrapper[4931]: I0131 04:24:33.974389 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:33Z","lastTransitionTime":"2026-01-31T04:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.078076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.078176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.078196 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.078224 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.078243 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.128891 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.182233 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.182302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.182325 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.182359 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.182381 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.285521 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.285586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.285607 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.285637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.285658 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.389333 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.389412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.389442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.389479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.389502 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.446242 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7"] Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.447318 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.452048 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.452637 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.477992 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.493575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.493681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.493709 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.493788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.493816 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.500692 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.518879 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.521880 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4804e45c-fc53-4c00-973a-fbc401ea2990-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.521990 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4804e45c-fc53-4c00-973a-fbc401ea2990-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.522082 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9m5b\" (UniqueName: \"kubernetes.io/projected/4804e45c-fc53-4c00-973a-fbc401ea2990-kube-api-access-h9m5b\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.522259 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4804e45c-fc53-4c00-973a-fbc401ea2990-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.541491 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.564378 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.592833 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.597625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.597763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.597784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.597818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.597839 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.611979 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.623344 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4804e45c-fc53-4c00-973a-fbc401ea2990-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.623406 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4804e45c-fc53-4c00-973a-fbc401ea2990-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.623446 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9m5b\" (UniqueName: \"kubernetes.io/projected/4804e45c-fc53-4c00-973a-fbc401ea2990-kube-api-access-h9m5b\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.623502 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4804e45c-fc53-4c00-973a-fbc401ea2990-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.624712 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4804e45c-fc53-4c00-973a-fbc401ea2990-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.625253 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4804e45c-fc53-4c00-973a-fbc401ea2990-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.634073 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4804e45c-fc53-4c00-973a-fbc401ea2990-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.635981 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.652952 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9m5b\" (UniqueName: \"kubernetes.io/projected/4804e45c-fc53-4c00-973a-fbc401ea2990-kube-api-access-h9m5b\") pod \"ovnkube-control-plane-749d76644c-5dkx7\" (UID: \"4804e45c-fc53-4c00-973a-fbc401ea2990\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.670974 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\
"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.701467 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.701528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.701545 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.701570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.701587 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.708786 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae98004
60f9abe67183befcc1c218ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.729784 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.750406 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.771207 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.771700 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.800399 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.805975 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.806024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.806036 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.806056 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.806067 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.828456 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.852910 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:19:47.372058703 +0000 UTC Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.855995 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:34Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.896391 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.896559 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:34 crc kubenswrapper[4931]: E0131 04:24:34.896714 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.896586 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:34 crc kubenswrapper[4931]: E0131 04:24:34.896855 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:34 crc kubenswrapper[4931]: E0131 04:24:34.897086 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.909919 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.910085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.910099 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.910124 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:34 crc kubenswrapper[4931]: I0131 04:24:34.910140 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:34Z","lastTransitionTime":"2026-01-31T04:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.013311 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.013393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.013418 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.013457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.013485 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.117925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.118431 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.118445 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.118466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.118481 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.138978 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/0.log" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.145126 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca" exitCode=1 Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.145179 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.146094 4931 scope.go:117] "RemoveContainer" containerID="1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.147496 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" event={"ID":"4804e45c-fc53-4c00-973a-fbc401ea2990","Type":"ContainerStarted","Data":"79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.147560 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" event={"ID":"4804e45c-fc53-4c00-973a-fbc401ea2990","Type":"ContainerStarted","Data":"44f5080c6e216c0eff948d07cf427eb66a9ea46fa257c06d7d623a384fe59d07"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.171375 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.191897 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.217377 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.223010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.223056 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.223067 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.223085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.223098 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.248653 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.276066 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.296880 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.312427 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.326381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.326445 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.326462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.326486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.326499 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.327362 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.352400 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.370798 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.394523 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.410497 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.425775 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.429941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.430005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.430028 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.430055 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.430081 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.441995 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:
24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.483849 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae98004
60f9abe67183befcc1c218ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"04:24:34.192278 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:24:34.192315 6254 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:24:34.192333 6254 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:24:34.192355 6254 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:34.192394 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 04:24:34.192401 6254 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 04:24:34.192415 6254 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:34.192412 6254 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:24:34.192429 6254 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 04:24:34.192477 6254 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:34.192497 6254 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:34.192502 6254 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 04:24:34.192522 6254 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:34.192526 6254 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 04:24:34.192535 6254 factory.go:656] Stopping watch factory\\\\nI0131 04:24:34.192556 6254 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.500093 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.543665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.543703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.543738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.543759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.543773 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.646962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.647014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.647032 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.647056 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.647143 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.750057 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.750103 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.750116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.750137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.750151 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.852920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.852972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.852985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.853004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.853020 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.853191 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:03:48.577910365 +0000 UTC Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.951032 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4cc6z"] Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.952860 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:35 crc kubenswrapper[4931]: E0131 04:24:35.953029 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.956351 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.956394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.956408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.956430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.956444 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:35Z","lastTransitionTime":"2026-01-31T04:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.971165 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:35 crc kubenswrapper[4931]: I0131 04:24:35.986550 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.001608 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.012062 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.026547 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.040909 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.047519 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9trw5\" (UniqueName: \"kubernetes.io/projected/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-kube-api-access-9trw5\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.047628 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.059174 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.059237 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.059249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.059280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.059297 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.069759 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"04:24:34.192278 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:24:34.192315 6254 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:24:34.192333 6254 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:24:34.192355 6254 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:34.192394 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 04:24:34.192401 6254 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 04:24:34.192415 6254 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:34.192412 6254 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:24:34.192429 6254 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 04:24:34.192477 6254 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:34.192497 6254 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:34.192502 6254 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 04:24:34.192522 6254 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:34.192526 6254 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 04:24:34.192535 6254 factory.go:656] Stopping watch factory\\\\nI0131 04:24:34.192556 6254 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.101003 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5
a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.115067 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.128472 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.140585 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.149651 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.149746 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9trw5\" (UniqueName: \"kubernetes.io/projected/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-kube-api-access-9trw5\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.149832 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.149935 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:36.649918907 +0000 UTC m=+35.459147781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.152787 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" event={"ID":"4804e45c-fc53-4c00-973a-fbc401ea2990","Type":"ContainerStarted","Data":"907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.155137 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/0.log" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.156379 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.160043 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.160155 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.161855 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.161876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.161885 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.161898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.161910 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.178058 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.181124 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9trw5\" (UniqueName: \"kubernetes.io/projected/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-kube-api-access-9trw5\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.191107 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.203586 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.215536 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.227333 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.242434 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.257420 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.265858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.265904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.265916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.265938 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.265952 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.274687 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.298105 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.317048 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.337183 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"04:24:34.192278 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:24:34.192315 6254 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:24:34.192333 6254 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:24:34.192355 6254 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:34.192394 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 04:24:34.192401 6254 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 04:24:34.192415 6254 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:34.192412 6254 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:24:34.192429 6254 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 04:24:34.192477 6254 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:34.192497 6254 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:34.192502 6254 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 04:24:34.192522 6254 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:34.192526 6254 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 04:24:34.192535 6254 factory.go:656] Stopping watch factory\\\\nI0131 04:24:34.192556 6254 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.350871 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.363593 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.368317 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.368509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.368578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.368611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.368643 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.375035 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.385866 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 
04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.402753 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.413971 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.426778 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.437745 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.449007 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.460808 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc 
kubenswrapper[4931]: I0131 04:24:36.471879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.472024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.472094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.472167 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.472240 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.474153 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.555088 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.555299 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.555374 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:24:52.555330247 +0000 UTC m=+51.364559161 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.555445 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.555466 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.555548 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:52.555518431 +0000 UTC m=+51.364747335 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.555633 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.555687 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:52.555676245 +0000 UTC m=+51.364905119 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.575676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.575772 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.575790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.575816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.575836 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.657155 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.657306 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.657401 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657441 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657507 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:37.657489772 +0000 UTC m=+36.466718886 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657508 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657511 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657631 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657750 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657529 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657876 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657875 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:52.657840601 +0000 UTC m=+51.467069505 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.657971 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:52.657951454 +0000 UTC m=+51.467180368 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.678886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.678954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.678972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.679002 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.679022 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.782578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.782645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.782668 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.782696 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.782717 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.853802 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:26:13.846911651 +0000 UTC Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.885951 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.886011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.886029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.886054 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.886071 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.896460 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.896639 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.896483 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.896805 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.896483 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:36 crc kubenswrapper[4931]: E0131 04:24:36.896913 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.989468 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.990152 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.990343 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.990538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:36 crc kubenswrapper[4931]: I0131 04:24:36.990788 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:36Z","lastTransitionTime":"2026-01-31T04:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.095409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.095851 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.096010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.096145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.096272 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.168054 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/1.log" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.169026 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/0.log" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.173364 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5" exitCode=1 Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.173480 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.173624 4931 scope.go:117] "RemoveContainer" containerID="1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.174863 4931 scope.go:117] "RemoveContainer" containerID="0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.175176 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.200529 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.200590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.200608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.200636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.200659 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.202060 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.228657 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.262147 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1831c5cd976dadae8e1e29c3921fe460dae9800460f9abe67183befcc1c218ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"message\\\":\\\"04:24:34.192278 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:24:34.192315 6254 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:24:34.192333 6254 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:24:34.192355 6254 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:34.192394 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 04:24:34.192401 6254 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 04:24:34.192415 6254 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:34.192412 6254 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:24:34.192429 6254 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 04:24:34.192477 6254 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:34.192497 6254 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:34.192502 6254 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 04:24:34.192522 6254 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:34.192526 6254 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 04:24:34.192535 6254 factory.go:656] Stopping watch factory\\\\nI0131 04:24:34.192556 6254 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"message\\\":\\\"19615025667110816) with []\\\\nI0131 04:24:36.321478 6416 address_set.go:302] 
New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0131 04:24:36.321577 6416 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 04:24:36.321655 6416 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 04:24:36.322084 6416 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322213 6416 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 04:24:36.322259 6416 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:36.322264 6416 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:36.322306 6416 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:36.322363 6416 factory.go:656] Stopping watch factory\\\\nI0131 04:24:36.322377 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:36.322411 6416 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:36.322417 6416 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:24:36.322424 6416 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322435 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 04:24:36.322501 6416 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.282800 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.304115 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.305238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.305319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.305347 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.305377 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.305403 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.323445 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.346101 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.372617 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.391028 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 
04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.408958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.409016 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.409043 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.409082 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.409110 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.427008 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.443294 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.458624 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.471253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.471976 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.472007 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.472044 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.472069 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.509942 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.521644 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.528259 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.528315 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.528332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.528350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.528666 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.535822 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.558187 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.558454 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.564805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.564856 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.564865 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.564878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.564887 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.578009 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.585869 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e
984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.589197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.589270 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.589280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.589294 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.589303 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.591552 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.603277 4931 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256
:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"si
zeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":46317936
5},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.607419 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.607449 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.607461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.607479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.607493 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.627033 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.627187 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.628796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.628826 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.628835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.628850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.628861 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.672476 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.672685 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.672793 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:39.672771026 +0000 UTC m=+38.481999980 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.731114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.731150 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.731163 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.731177 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.731187 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.833763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.833825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.833837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.833853 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.833863 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.854875 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:57:54.014691047 +0000 UTC Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.895918 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:37 crc kubenswrapper[4931]: E0131 04:24:37.896280 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.936352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.936408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.936420 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.936439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:37 crc kubenswrapper[4931]: I0131 04:24:37.936452 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:37Z","lastTransitionTime":"2026-01-31T04:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.040101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.040189 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.040212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.040246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.040271 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.143662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.143784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.143805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.143838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.143866 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.179527 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/1.log" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.246527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.246851 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.246975 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.247044 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.247115 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.299605 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.300364 4931 scope.go:117] "RemoveContainer" containerID="0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5" Jan 31 04:24:38 crc kubenswrapper[4931]: E0131 04:24:38.300525 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.313211 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.328610 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.350262 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.350322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.350345 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.350375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.350400 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.351555 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.375130 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.392127 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.414921 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.440594 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.453831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.453880 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.453895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.453918 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.453936 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.470564 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc
3e9111a40edf515c1afd45e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"message\\\":\\\"19615025667110816) with []\\\\nI0131 04:24:36.321478 6416 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0131 04:24:36.321577 6416 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 04:24:36.321655 6416 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 04:24:36.322084 6416 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322213 6416 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 04:24:36.322259 6416 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:36.322264 6416 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:36.322306 6416 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:36.322363 6416 factory.go:656] Stopping watch factory\\\\nI0131 04:24:36.322377 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:36.322411 6416 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:36.322417 6416 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:24:36.322424 6416 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322435 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 04:24:36.322501 6416 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.515320 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0
ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.533358 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.557205 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.557624 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.557708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.557767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.557804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.557836 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.576812 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24e
c5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.601081 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.624545 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.647037 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.661192 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.661261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.661289 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.661325 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.661349 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.669209 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.687853 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.703369 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.719151 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.742399 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.758183 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.765140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.765189 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.765204 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.765225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.765239 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.776673 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.802101 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.826324 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.851822 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.854992 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:05:22.333264622 +0000 UTC Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.872384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.872412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.872420 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.872447 4931 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.872459 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.877552 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.896862 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.896907 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.897055 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:38 crc kubenswrapper[4931]: E0131 04:24:38.897061 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:38 crc kubenswrapper[4931]: E0131 04:24:38.897236 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:38 crc kubenswrapper[4931]: E0131 04:24:38.897522 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.899593 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.919658 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.942612 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.970594 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.977346 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.977398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.977407 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.977423 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:38 crc kubenswrapper[4931]: I0131 04:24:38.977437 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:38Z","lastTransitionTime":"2026-01-31T04:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.004969 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc
3e9111a40edf515c1afd45e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"message\\\":\\\"19615025667110816) with []\\\\nI0131 04:24:36.321478 6416 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0131 04:24:36.321577 6416 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 04:24:36.321655 6416 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 04:24:36.322084 6416 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322213 6416 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 04:24:36.322259 6416 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:36.322264 6416 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:36.322306 6416 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:36.322363 6416 factory.go:656] Stopping watch factory\\\\nI0131 04:24:36.322377 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:36.322411 6416 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:36.322417 6416 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:24:36.322424 6416 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322435 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 04:24:36.322501 6416 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.041525 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0
ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.063503 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.080672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.080809 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.080831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.080890 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.080916 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.088647 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.111190 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:39Z is after 2025-08-24T17:21:41Z" Jan 31 
04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.184070 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.184126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.184173 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.184190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.184198 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.288710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.288817 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.288835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.288863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.288881 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.392287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.392424 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.392443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.392470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.392492 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.497203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.497280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.497304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.497331 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.497350 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.600803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.600884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.600902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.600932 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.600958 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.703767 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:39 crc kubenswrapper[4931]: E0131 04:24:39.704048 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:39 crc kubenswrapper[4931]: E0131 04:24:39.704176 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:43.704136573 +0000 UTC m=+42.513365477 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.706811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.706882 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.706907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.706941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.706964 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.810898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.810959 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.810976 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.811003 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.811023 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.855687 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:03:52.886768779 +0000 UTC Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.896706 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:39 crc kubenswrapper[4931]: E0131 04:24:39.897059 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.913594 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.913664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.913691 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.913763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:39 crc kubenswrapper[4931]: I0131 04:24:39.913793 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:39Z","lastTransitionTime":"2026-01-31T04:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.018171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.018238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.018267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.018300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.018325 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.121613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.121657 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.121670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.121696 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.121712 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.226147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.226264 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.226331 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.226406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.226434 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.330024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.330101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.330152 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.330187 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.330211 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.433688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.433802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.433825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.433857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.433877 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.538845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.538931 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.538952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.538980 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.539001 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.642231 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.642302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.642328 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.642365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.642385 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.745659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.745760 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.745773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.745800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.745810 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.848888 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.848940 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.848952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.848974 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.848990 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.856050 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:23:38.570466212 +0000 UTC Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.896530 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:40 crc kubenswrapper[4931]: E0131 04:24:40.896696 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.897239 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:40 crc kubenswrapper[4931]: E0131 04:24:40.897397 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.897256 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:40 crc kubenswrapper[4931]: E0131 04:24:40.897501 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.953137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.953213 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.953225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.953288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:40 crc kubenswrapper[4931]: I0131 04:24:40.953303 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:40Z","lastTransitionTime":"2026-01-31T04:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.056342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.056426 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.056450 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.056482 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.056506 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.160251 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.160299 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.160312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.160332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.160344 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.263368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.263499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.263526 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.263573 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.263601 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.368109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.368189 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.368206 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.368239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.368262 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.470746 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.470792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.470803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.470825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.470837 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.574676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.574751 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.574763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.574781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.574792 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.678264 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.678353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.678378 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.678416 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.678442 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.781708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.781780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.781791 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.781806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.781818 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.878896 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:46:39.270676179 +0000 UTC Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.885253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.885312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.885329 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.885352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.885366 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.896251 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:41 crc kubenswrapper[4931]: E0131 04:24:41.896395 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.916790 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.936940 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.960256 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.983177 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.989152 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.989206 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.989217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.989237 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:41 crc kubenswrapper[4931]: I0131 04:24:41.989249 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:41Z","lastTransitionTime":"2026-01-31T04:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.019648 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc
3e9111a40edf515c1afd45e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"message\\\":\\\"19615025667110816) with []\\\\nI0131 04:24:36.321478 6416 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0131 04:24:36.321577 6416 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 04:24:36.321655 6416 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 04:24:36.322084 6416 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322213 6416 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 04:24:36.322259 6416 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:36.322264 6416 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:36.322306 6416 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:36.322363 6416 factory.go:656] Stopping watch factory\\\\nI0131 04:24:36.322377 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:36.322411 6416 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:36.322417 6416 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:24:36.322424 6416 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322435 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 04:24:36.322501 6416 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.040949 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.059698 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: 
I0131 04:24:42.086044 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.095601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.095658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.095820 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.095849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.095870 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.103416 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.123133 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 
04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.149671 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.165968 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.181359 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.197317 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.198806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.198864 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.198882 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.198939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.198955 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.214064 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.235114 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.263220 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.303660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.303759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.303778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.303806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.303822 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.409428 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.409481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.409494 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.409515 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.409530 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.513351 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.513391 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.513402 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.513419 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.513430 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.617114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.617179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.617197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.617227 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.617245 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.720699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.720796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.720815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.720845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.720865 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.824314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.824369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.824384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.824412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.824436 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.879320 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:12:42.721752069 +0000 UTC Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.896478 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.896524 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.896524 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:42 crc kubenswrapper[4931]: E0131 04:24:42.896714 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:42 crc kubenswrapper[4931]: E0131 04:24:42.896903 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:42 crc kubenswrapper[4931]: E0131 04:24:42.897072 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.927939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.928014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.928033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.928064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:42 crc kubenswrapper[4931]: I0131 04:24:42.928087 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:42Z","lastTransitionTime":"2026-01-31T04:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.032362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.032443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.032461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.032492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.032512 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.136244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.136286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.136295 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.136309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.136319 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.239487 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.239554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.239571 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.239599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.239619 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.343103 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.343157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.343166 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.343184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.343196 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.446085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.446123 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.446133 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.446147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.446157 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.548688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.548790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.548810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.548837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.548855 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.651620 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.651664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.651674 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.651690 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.651700 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.754961 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.754989 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.755023 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.755042 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.755070 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.755095 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: E0131 04:24:43.755255 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:43 crc kubenswrapper[4931]: E0131 04:24:43.755350 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:24:51.75532262 +0000 UTC m=+50.564551534 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.858045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.858120 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.858137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.858165 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.858188 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.880348 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:40:08.990248141 +0000 UTC Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.896983 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:43 crc kubenswrapper[4931]: E0131 04:24:43.897317 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.961464 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.961516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.961534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.961560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:43 crc kubenswrapper[4931]: I0131 04:24:43.961582 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:43Z","lastTransitionTime":"2026-01-31T04:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.066216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.066812 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.067023 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.067240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.067414 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.170792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.171084 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.171155 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.171226 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.171286 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.274604 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.274677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.274697 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.274760 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.274783 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.379322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.379395 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.379418 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.379452 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.379476 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.482557 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.482596 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.482608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.482626 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.482636 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.587341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.587426 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.587445 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.587476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.587494 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.690394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.690459 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.690478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.690505 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.690524 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.793549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.793603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.793616 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.793641 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.793654 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.881477 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:38:48.235388256 +0000 UTC Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.895980 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.895989 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.896252 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:44 crc kubenswrapper[4931]: E0131 04:24:44.896311 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.896346 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.896392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.896410 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.896433 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.896450 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:44 crc kubenswrapper[4931]: E0131 04:24:44.896489 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:44 crc kubenswrapper[4931]: E0131 04:24:44.896609 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.999808 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.999877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.999900 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.999928 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:44 crc kubenswrapper[4931]: I0131 04:24:44.999951 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:44Z","lastTransitionTime":"2026-01-31T04:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.103443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.103916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.104128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.104318 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.104497 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.207559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.207619 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.207632 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.207656 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.207671 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.310011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.310317 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.310384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.310553 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.310620 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.414592 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.414636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.414647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.414666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.414676 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.517815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.517852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.517864 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.517881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.517894 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.620375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.620661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.620753 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.620825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.620885 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.724367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.724424 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.724445 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.724469 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.724486 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.827714 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.828049 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.828152 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.828240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.828349 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.882480 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:19:26.089276024 +0000 UTC Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.896123 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:45 crc kubenswrapper[4931]: E0131 04:24:45.896390 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.931899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.932115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.932360 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.932558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:45 crc kubenswrapper[4931]: I0131 04:24:45.932702 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:45Z","lastTransitionTime":"2026-01-31T04:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.043404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.044292 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.044328 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.044366 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.044391 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.147460 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.147499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.147510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.147526 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.147539 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.250585 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.250625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.250637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.250654 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.250666 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.353157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.353538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.353677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.353825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.353920 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.458015 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.458062 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.458074 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.458093 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.458106 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.561639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.561684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.561698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.561741 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.561755 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.665070 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.665441 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.665531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.665628 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.665743 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.769952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.770305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.770852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.770973 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.771058 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.875443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.875518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.875536 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.875572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.875593 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.882670 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:29:22.522236219 +0000 UTC Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.896890 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.896990 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.896891 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:46 crc kubenswrapper[4931]: E0131 04:24:46.897142 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:46 crc kubenswrapper[4931]: E0131 04:24:46.897306 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:46 crc kubenswrapper[4931]: E0131 04:24:46.897494 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.979874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.979941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.979963 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.979989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:46 crc kubenswrapper[4931]: I0131 04:24:46.980006 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:46Z","lastTransitionTime":"2026-01-31T04:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.083077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.083142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.083157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.083180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.083196 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.186590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.186660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.186680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.186711 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.186790 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.290313 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.290418 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.290447 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.290485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.290512 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.392981 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.393022 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.393036 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.393052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.393069 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.496592 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.496663 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.496687 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.496746 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.496768 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.600411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.600479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.600499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.600527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.600549 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.698960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.699435 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.699588 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.699763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.699931 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: E0131 04:24:47.720295 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.725596 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.725803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.725960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.726109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.726249 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: E0131 04:24:47.745255 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.750817 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.750918 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.750943 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.750979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.751005 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: E0131 04:24:47.770459 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.778615 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.778787 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.778808 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.778863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.778880 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: E0131 04:24:47.802282 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.806558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.806592 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.806621 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.806635 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.806678 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: E0131 04:24:47.820437 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:47 crc kubenswrapper[4931]: E0131 04:24:47.820545 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.822580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.822658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.822670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.822712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.822739 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.883126 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:20:55.889050502 +0000 UTC Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.896654 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:47 crc kubenswrapper[4931]: E0131 04:24:47.896865 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.925518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.925560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.925568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.925583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:47 crc kubenswrapper[4931]: I0131 04:24:47.925595 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:47Z","lastTransitionTime":"2026-01-31T04:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.027545 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.027868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.027945 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.028014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.028073 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.131134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.131185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.131199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.131218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.131230 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.233305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.233612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.233677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.233780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.233859 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.336391 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.336430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.336442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.336456 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.336467 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.439763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.439806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.439815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.439832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.439846 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.542824 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.542867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.542877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.542896 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.542909 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.645686 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.645795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.645815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.645844 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.645863 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.748946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.749005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.749024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.749048 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.749066 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.851617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.852003 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.852085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.852155 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.852213 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.884080 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:47:34.339423156 +0000 UTC Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.896468 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:48 crc kubenswrapper[4931]: E0131 04:24:48.896655 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.896923 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.897078 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:48 crc kubenswrapper[4931]: E0131 04:24:48.897207 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:48 crc kubenswrapper[4931]: E0131 04:24:48.897577 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.897894 4931 scope.go:117] "RemoveContainer" containerID="0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.955669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.956320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.956346 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.956384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:48 crc kubenswrapper[4931]: I0131 04:24:48.956403 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:48Z","lastTransitionTime":"2026-01-31T04:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.060457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.060531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.060550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.060606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.060624 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.163382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.163430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.163440 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.163457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.163467 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.231845 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/1.log" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.235431 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.236186 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.253533 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.267193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.267254 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.267270 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.267296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.267309 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.268678 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.279698 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.294377 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.308802 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.332022 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"message\\\":\\\"19615025667110816) with []\\\\nI0131 04:24:36.321478 6416 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0131 04:24:36.321577 6416 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 04:24:36.321655 6416 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 04:24:36.322084 6416 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322213 6416 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 04:24:36.322259 6416 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:36.322264 6416 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:36.322306 6416 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:36.322363 6416 factory.go:656] Stopping watch factory\\\\nI0131 04:24:36.322377 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:36.322411 6416 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:36.322417 6416 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:24:36.322424 6416 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322435 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 04:24:36.322501 6416 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.355543 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e29
0051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.367254 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.370734 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.370778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.370793 4931 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.370813 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.370826 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.382001 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.394779 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 
04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.406114 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.419214 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.436896 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.453946 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.469714 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.473286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.473367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.473384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.473438 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.473458 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.483976 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.514030 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.575432 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.575485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.575498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.575520 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.575532 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.678115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.678388 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.678469 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.678546 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.678614 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.768318 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.775486 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.780396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.780471 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.780489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.780513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.780531 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.780536 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.792046 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.800487 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.812512 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.823430 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.836572 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.853601 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.871089 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.883290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.883342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.883364 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.883392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.883412 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.884031 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:
24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.884227 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:11:02.416863906 +0000 UTC Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.896074 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:49 crc kubenswrapper[4931]: E0131 04:24:49.896240 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.907197 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"message\\\":\\\"19615025667110816) with []\\\\nI0131 04:24:36.321478 6416 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0131 04:24:36.321577 6416 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 04:24:36.321655 6416 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 04:24:36.322084 6416 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322213 6416 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 04:24:36.322259 6416 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:36.322264 6416 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:36.322306 6416 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:36.322363 6416 factory.go:656] Stopping watch factory\\\\nI0131 04:24:36.322377 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:36.322411 6416 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:36.322417 6416 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:24:36.322424 6416 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322435 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 04:24:36.322501 6416 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.926850 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.940259 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.949350 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.961546 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.972206 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 
04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.985613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.985665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.985678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.985699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.985710 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:49Z","lastTransitionTime":"2026-01-31T04:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:49 crc kubenswrapper[4931]: I0131 04:24:49.991107 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.001437 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.088653 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.088697 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.088712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.088763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.088778 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.191899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.191949 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.191963 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.191982 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.191994 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.241191 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/2.log" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.241846 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/1.log" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.244665 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb" exitCode=1 Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.244737 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.244830 4931 scope.go:117] "RemoveContainer" containerID="0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.245548 4931 scope.go:117] "RemoveContainer" containerID="6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb" Jan 31 04:24:50 crc kubenswrapper[4931]: E0131 04:24:50.245707 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.261782 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.271519 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.283490 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.296138 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.296175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.296184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.296200 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.296212 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.299052 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:
24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.323455 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30
bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbd302bd31257688924aea800a3966cc723e8bc3e9111a40edf515c1afd45e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"message\\\":\\\"19615025667110816) with []\\\\nI0131 04:24:36.321478 6416 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0131 04:24:36.321577 6416 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 04:24:36.321655 6416 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 04:24:36.322084 6416 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322213 6416 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 04:24:36.322259 6416 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:24:36.322264 6416 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:24:36.322306 6416 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:24:36.322363 6416 factory.go:656] Stopping watch factory\\\\nI0131 04:24:36.322377 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:36.322411 6416 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:24:36.322417 6416 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:24:36.322424 6416 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:24:36.322435 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 04:24:36.322501 6416 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.337946 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.349324 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.363013 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.373235 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.390262 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.399134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.399182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.399196 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.399216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.399231 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.404243 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.425868 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.439035 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.452083 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.462120 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.472433 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc 
kubenswrapper[4931]: I0131 04:24:50.487060 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.498928 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.501869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.501913 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.501932 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.501958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.501975 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.604869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.604912 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.604924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.604942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.604955 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.706996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.707353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.707453 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.707552 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.707644 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.811150 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.811217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.811234 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.811261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.811278 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.885346 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:29:54.260839923 +0000 UTC Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.895787 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.895823 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.895876 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:50 crc kubenswrapper[4931]: E0131 04:24:50.895993 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:50 crc kubenswrapper[4931]: E0131 04:24:50.896133 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:50 crc kubenswrapper[4931]: E0131 04:24:50.896265 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.914551 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.914609 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.914628 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.914656 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:50 crc kubenswrapper[4931]: I0131 04:24:50.914676 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:50Z","lastTransitionTime":"2026-01-31T04:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.018072 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.018131 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.018150 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.018174 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.018196 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.124459 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.124596 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.124627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.124657 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.124683 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.228504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.228926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.229102 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.229388 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.229581 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.249450 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/2.log" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.253378 4931 scope.go:117] "RemoveContainer" containerID="6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb" Jan 31 04:24:51 crc kubenswrapper[4931]: E0131 04:24:51.253533 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.273700 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.288084 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.300796 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.316529 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc 
kubenswrapper[4931]: I0131 04:24:51.338093 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.338143 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.338160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.338183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.338203 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.359180 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.386000 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.405120 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.417441 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.432284 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.440971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.441019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.441033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.441052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.441064 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.448599 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:
24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.467244 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30
bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.480822 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.492515 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: 
I0131 04:24:51.506494 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.517788 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.530611 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.544063 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.544070 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 
04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.544126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.544144 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.544170 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.544188 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.563023 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6
682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.647640 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.647703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.647738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.647764 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.647777 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.750332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.750393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.750414 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.750440 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.750460 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.853254 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.853313 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.853332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.853358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.853378 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.855138 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:51 crc kubenswrapper[4931]: E0131 04:24:51.855393 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:51 crc kubenswrapper[4931]: E0131 04:24:51.855519 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:25:07.855480828 +0000 UTC m=+66.664709742 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.886590 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:50:24.175923627 +0000 UTC Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.896208 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:51 crc kubenswrapper[4931]: E0131 04:24:51.896370 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.921752 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.943821 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.956483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.956540 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.956558 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.956584 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.956607 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:51Z","lastTransitionTime":"2026-01-31T04:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.968047 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:51 crc kubenswrapper[4931]: I0131 04:24:51.986566 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.017511 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.042250 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.059242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.059486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.059631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.059798 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.059925 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.071906 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30
bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.096453 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0
ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.116759 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.131522 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.146638 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.163108 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.163375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.163534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.163667 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.163803 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.165631 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.186401 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.205179 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.222135 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.236149 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.248524 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.263293 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:52 crc 
kubenswrapper[4931]: I0131 04:24:52.265977 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.266010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.266020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.266035 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.266045 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.369252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.369289 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.369299 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.369314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.369324 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.471504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.471551 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.471563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.471580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.471592 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.561522 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.561824 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:25:24.561773968 +0000 UTC m=+83.371002872 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.562060 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.562122 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.562222 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.562254 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.562325 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:25:24.562295852 +0000 UTC m=+83.371524756 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.562351 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:25:24.562338833 +0000 UTC m=+83.371567747 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.574495 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.574581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.574603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.574629 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.574648 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.664170 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.664261 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664384 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664409 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664421 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664384 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664469 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:25:24.664453097 +0000 UTC m=+83.473681971 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664475 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664489 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.664517 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:25:24.664508659 +0000 UTC m=+83.473737533 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.677121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.677183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.677203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.677228 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.677246 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.780667 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.780747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.780766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.780792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.780809 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.883353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.883392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.883406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.883426 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.883438 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.886964 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:37:36.060717576 +0000 UTC Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.896213 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.896245 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.896263 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.896326 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.896410 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:52 crc kubenswrapper[4931]: E0131 04:24:52.896508 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.987149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.987187 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.987200 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.987218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:52 crc kubenswrapper[4931]: I0131 04:24:52.987229 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:52Z","lastTransitionTime":"2026-01-31T04:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.090410 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.090439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.090448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.090467 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.090483 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.193614 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.193645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.193654 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.193670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.193681 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.296559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.296600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.296613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.296631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.296645 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.399811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.399885 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.399902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.399930 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.399948 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.502600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.502653 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.502670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.502702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.502732 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.606099 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.606159 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.606177 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.606205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.606223 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.709804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.709872 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.709895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.709942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.709968 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.813017 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.813097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.813121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.813153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.813176 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.887717 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:50:37.563494207 +0000 UTC Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.896383 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:53 crc kubenswrapper[4931]: E0131 04:24:53.896655 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.917701 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.918074 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.918472 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.918920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:53 crc kubenswrapper[4931]: I0131 04:24:53.918987 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:53Z","lastTransitionTime":"2026-01-31T04:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.021864 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.021907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.021928 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.021951 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.021967 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.125392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.125481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.125528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.125550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.125565 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.229571 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.229625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.229639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.229659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.229672 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.333511 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.333578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.333601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.333635 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.333657 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.437681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.437763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.437780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.437803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.437822 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.541099 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.541182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.541217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.541260 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.541286 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.644566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.644631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.644648 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.644675 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.644694 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.748127 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.748179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.748192 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.748211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.748225 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.851556 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.851626 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.851643 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.851665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.851680 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.889157 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:08:18.534107953 +0000 UTC Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.896575 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.896659 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:54 crc kubenswrapper[4931]: E0131 04:24:54.896785 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.896868 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:54 crc kubenswrapper[4931]: E0131 04:24:54.897067 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:54 crc kubenswrapper[4931]: E0131 04:24:54.897127 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.955079 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.955140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.955158 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.955183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:54 crc kubenswrapper[4931]: I0131 04:24:54.955202 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:54Z","lastTransitionTime":"2026-01-31T04:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.058795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.059480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.059534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.059572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.059598 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.162074 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.162201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.162214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.162235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.162247 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.264841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.264901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.264916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.264937 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.264949 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.368020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.368069 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.368083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.368104 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.368116 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.470357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.470408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.470420 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.470439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.470460 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.573160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.573392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.573500 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.573606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.573693 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.676349 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.676405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.676421 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.676441 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.676455 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.778832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.778868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.778877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.778891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.778901 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.881872 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.882241 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.882432 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.882636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.882877 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.890205 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:24:28.116340616 +0000 UTC Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.896546 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:55 crc kubenswrapper[4931]: E0131 04:24:55.896700 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.986328 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.986619 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.986707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.986891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:55 crc kubenswrapper[4931]: I0131 04:24:55.986987 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:55Z","lastTransitionTime":"2026-01-31T04:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.089970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.090338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.090427 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.090512 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.090711 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.193869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.194246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.194501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.194708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.194926 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.297109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.297147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.297156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.297169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.297178 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.399020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.399058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.399067 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.399083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.399092 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.502545 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.502611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.502625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.502647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.502663 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.604681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.604740 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.604756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.604774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.604786 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.707513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.707562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.707577 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.707597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.707610 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.809915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.809949 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.809959 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.809974 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.809984 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.890830 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:03:41.28693981 +0000 UTC Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.896192 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:56 crc kubenswrapper[4931]: E0131 04:24:56.896305 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.896477 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:56 crc kubenswrapper[4931]: E0131 04:24:56.896521 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.896616 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:56 crc kubenswrapper[4931]: E0131 04:24:56.896662 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.911908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.911941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.911950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.911980 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:56 crc kubenswrapper[4931]: I0131 04:24:56.911994 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:56Z","lastTransitionTime":"2026-01-31T04:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.013781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.013820 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.013830 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.013845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.013855 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.115884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.115939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.115951 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.115973 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.115987 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.218476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.218546 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.218565 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.218595 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.218613 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.321431 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.321493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.321510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.321541 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.321562 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.424128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.424176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.424191 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.424211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.424225 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.526658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.526702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.526715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.526757 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.526774 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.628695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.628771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.628787 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.628806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.628818 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.732109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.732445 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.732466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.732522 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.732619 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.835147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.835222 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.835242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.835271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.835294 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.891715 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:04:09.95420339 +0000 UTC Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.896297 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:57 crc kubenswrapper[4931]: E0131 04:24:57.896535 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.938306 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.938367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.938384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.938407 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.938423 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.986106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.986167 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.986187 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.986211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:57 crc kubenswrapper[4931]: I0131 04:24:57.986227 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:57Z","lastTransitionTime":"2026-01-31T04:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.005104 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.009839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.009898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.009916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.009942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.009966 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.030411 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.034927 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.034983 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.034998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.035018 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.035029 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.049702 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.053408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.053447 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.053466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.053488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.053503 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.068059 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.072450 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.072485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.072494 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.072513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.072523 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.085231 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:24:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.085357 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.087198 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.087231 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.087240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.087255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.087265 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.193145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.193208 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.193223 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.193243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.193261 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.296140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.296185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.296195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.296213 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.296224 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.398460 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.398498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.398510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.398529 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.398542 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.501428 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.501474 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.501489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.501514 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.501529 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.604259 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.604322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.604337 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.604359 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.604374 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.706647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.706702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.706742 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.706767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.706784 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.808863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.808908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.808922 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.808943 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.808956 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.892259 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:22:51.322461252 +0000 UTC Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.896705 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.897008 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.897133 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.897233 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.896767 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:24:58 crc kubenswrapper[4931]: E0131 04:24:58.897392 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.911677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.911764 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.911784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.911809 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:58 crc kubenswrapper[4931]: I0131 04:24:58.911827 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:58Z","lastTransitionTime":"2026-01-31T04:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.014838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.014896 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.014913 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.014937 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.014954 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.118312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.118680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.119025 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.119201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.119346 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.222387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.222641 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.222756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.222857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.222929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.326558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.326614 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.326631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.326656 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.326673 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.431300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.431381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.431399 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.431428 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.431447 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.536517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.536583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.536600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.536628 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.536646 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.639128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.639178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.639191 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.639210 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.639224 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.742638 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.742707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.742756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.742783 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.742802 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.846232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.846287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.846307 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.846332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.846354 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.893415 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:19:27.130382985 +0000 UTC Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.896396 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:24:59 crc kubenswrapper[4931]: E0131 04:24:59.896665 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.949071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.949143 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.949165 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.949193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:24:59 crc kubenswrapper[4931]: I0131 04:24:59.949213 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:24:59Z","lastTransitionTime":"2026-01-31T04:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.052866 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.052956 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.052976 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.053003 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.053021 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.155626 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.155679 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.155692 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.155712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.155742 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.258840 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.258887 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.258901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.258919 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.258931 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.362574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.362649 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.362669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.362848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.362870 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.464955 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.465007 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.465019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.465040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.465051 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.568080 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.568122 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.568133 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.568151 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.568163 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.671409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.671462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.671472 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.671488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.671498 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.773840 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.774177 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.774354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.774518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.774587 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.877627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.878025 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.878204 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.878635 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.879050 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.894424 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:40:42.188129856 +0000 UTC Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.896831 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.896841 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.896957 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:00 crc kubenswrapper[4931]: E0131 04:25:00.897154 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:00 crc kubenswrapper[4931]: E0131 04:25:00.897389 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:00 crc kubenswrapper[4931]: E0131 04:25:00.897509 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.982887 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.983301 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.983459 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.983795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:00 crc kubenswrapper[4931]: I0131 04:25:00.984002 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:00Z","lastTransitionTime":"2026-01-31T04:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.087225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.087305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.087332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.087368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.087391 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.190290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.190373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.190415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.190447 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.190471 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.292892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.293013 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.293034 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.293061 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.293079 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.396070 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.396151 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.396207 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.396233 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.396251 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.499466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.499519 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.499534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.499556 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.499570 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.602702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.603171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.603341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.603498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.603640 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.707660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.707780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.707811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.707843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.707869 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.810854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.810900 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.810915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.810933 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.810947 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.896337 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:50:47.360421124 +0000 UTC Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.896386 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:01 crc kubenswrapper[4931]: E0131 04:25:01.896638 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.912876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.912936 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.912953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.912977 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.912996 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:01Z","lastTransitionTime":"2026-01-31T04:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.916751 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.930713 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.942529 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.961673 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:01 crc kubenswrapper[4931]: I0131 04:25:01.977298 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.026356 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.026396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.026408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.026424 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.026435 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.027276 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.046904 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.061286 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.080335 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.104093 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.117665 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.129506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.129557 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.129574 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.129600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.129620 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.131085 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.141292 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.153341 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.166642 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 
2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.178167 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.196955 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.211651 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.231861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.232126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.232275 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.232418 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.232570 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.335885 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.336182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.336326 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.336482 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.336665 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.440563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.440622 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.440636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.440659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.440674 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.543643 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.544066 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.544229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.544376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.544509 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.647939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.648020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.648045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.648257 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.648352 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.751742 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.751813 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.752368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.752453 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.752804 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.856884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.856947 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.856965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.856990 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.857007 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.895831 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.895959 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.895831 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:02 crc kubenswrapper[4931]: E0131 04:25:02.896026 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:02 crc kubenswrapper[4931]: E0131 04:25:02.896154 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:02 crc kubenswrapper[4931]: E0131 04:25:02.896355 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.896494 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:09:38.49493709 +0000 UTC Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.959993 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.960068 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.960090 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.960115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:02 crc kubenswrapper[4931]: I0131 04:25:02.960132 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:02Z","lastTransitionTime":"2026-01-31T04:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.063212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.063295 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.063329 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.063358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.063385 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.167030 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.167091 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.167121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.167150 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.167173 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.270203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.270262 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.270279 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.270305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.270322 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.374007 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.374077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.374103 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.374134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.374156 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.477249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.477323 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.477341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.477367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.477387 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.580715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.580809 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.580827 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.580853 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.580873 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.684028 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.684094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.684115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.684145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.684164 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.787564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.787603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.787613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.787628 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.787637 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.890254 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.890288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.890299 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.890317 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.890329 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.895825 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:03 crc kubenswrapper[4931]: E0131 04:25:03.895959 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.896857 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:14:14.83482404 +0000 UTC Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.993461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.993516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.993531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.993547 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:03 crc kubenswrapper[4931]: I0131 04:25:03.993559 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:03Z","lastTransitionTime":"2026-01-31T04:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.097012 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.097080 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.097097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.097298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.097315 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.200223 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.200280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.200294 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.200313 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.200326 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.303298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.303358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.303371 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.303392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.303405 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.406027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.406099 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.406117 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.406146 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.406168 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.509455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.509510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.509527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.509550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.509569 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.612905 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.612958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.612974 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.613003 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.613019 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.716451 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.716509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.716533 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.716562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.716580 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.819516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.819564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.819581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.819605 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.819622 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.897472 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:16:52.732142115 +0000 UTC Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.922986 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.923029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.923046 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.923070 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.923087 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:04Z","lastTransitionTime":"2026-01-31T04:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.969629 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.969752 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:04 crc kubenswrapper[4931]: E0131 04:25:04.969796 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.969809 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:04 crc kubenswrapper[4931]: I0131 04:25:04.969918 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:04 crc kubenswrapper[4931]: E0131 04:25:04.971426 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:04 crc kubenswrapper[4931]: E0131 04:25:04.972099 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:04 crc kubenswrapper[4931]: E0131 04:25:04.972681 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.026139 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.026190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.026199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.026214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.026222 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.129156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.129215 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.129232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.129256 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.129275 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.231634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.231695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.231712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.231779 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.231801 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.336089 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.336158 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.336175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.336202 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.336220 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.439255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.439310 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.439323 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.439342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.439356 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.542532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.542593 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.542610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.542636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.542655 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.645659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.645712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.645740 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.645758 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.645770 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.749216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.749292 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.749321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.749354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.749374 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.851710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.851770 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.851784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.851803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.851817 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.899886 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:04:18.146356257 +0000 UTC Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.954090 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.954139 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.954157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.954178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:05 crc kubenswrapper[4931]: I0131 04:25:05.954195 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:05Z","lastTransitionTime":"2026-01-31T04:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.058230 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.058285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.058302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.058344 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.058370 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.161033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.161087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.161103 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.161127 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.161142 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.263948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.263991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.264002 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.264017 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.264026 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.366882 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.366926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.366935 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.366950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.366960 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.469153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.469223 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.469248 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.469275 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.469296 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.571825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.571882 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.571895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.571916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.571926 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.674059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.674104 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.674113 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.674126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.674138 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.776764 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.776810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.776819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.776836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.776846 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.879407 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.879449 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.879459 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.879478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.879492 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.896056 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.896111 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.896110 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:06 crc kubenswrapper[4931]: E0131 04:25:06.896169 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.896304 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:06 crc kubenswrapper[4931]: E0131 04:25:06.896300 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:06 crc kubenswrapper[4931]: E0131 04:25:06.896672 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:06 crc kubenswrapper[4931]: E0131 04:25:06.896816 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.897227 4931 scope.go:117] "RemoveContainer" containerID="6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb" Jan 31 04:25:06 crc kubenswrapper[4931]: E0131 04:25:06.897523 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.901019 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:15:16.751628735 +0000 UTC Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.982261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.982296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.982305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.982320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:06 crc kubenswrapper[4931]: I0131 04:25:06.982332 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:06Z","lastTransitionTime":"2026-01-31T04:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.084792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.084832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.084843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.084860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.084872 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.187323 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.187362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.187373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.187389 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.187399 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.289934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.289970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.289978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.289991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.290002 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.392430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.392493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.392509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.392535 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.392552 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.495128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.495191 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.495210 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.495238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.495260 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.597669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.597708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.597720 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.597747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.597759 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.699838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.699904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.699922 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.699948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.699966 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.802690 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.802820 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.802842 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.802868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.802888 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.901142 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:45:53.688991934 +0000 UTC Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.904868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.904939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.904960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.904988 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.905011 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:07Z","lastTransitionTime":"2026-01-31T04:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:07 crc kubenswrapper[4931]: I0131 04:25:07.924783 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:07 crc kubenswrapper[4931]: E0131 04:25:07.925022 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:25:07 crc kubenswrapper[4931]: E0131 04:25:07.925156 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:25:39.925123256 +0000 UTC m=+98.734352180 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.007738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.008032 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.008105 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.008175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.008241 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.110273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.110324 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.110336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.110356 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.110367 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.212710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.212757 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.212765 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.212778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.212787 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.263619 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.263645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.263653 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.263666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.263675 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.280263 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:08Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.283880 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.283908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.283916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.283929 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.283938 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.300708 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:08Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.304630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.304678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.304691 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.304709 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.304738 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.321798 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:08Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.325414 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.325448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.325461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.325480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.325493 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.343643 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:08Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.354948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.354997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.355007 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.355022 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.355033 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.368020 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:08Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.368179 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.370061 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.370102 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.370116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.370134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.370151 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.472788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.472839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.472852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.472867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.472877 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.576043 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.576084 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.576092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.576113 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.576123 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.678941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.679372 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.679383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.679401 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.679415 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.781968 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.782007 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.782020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.782036 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.782047 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.886925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.886975 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.886993 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.887019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.887037 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.896157 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.896217 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.896237 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.896350 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.896412 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.896454 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.896600 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:08 crc kubenswrapper[4931]: E0131 04:25:08.896766 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.901329 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:08:00.043663333 +0000 UTC Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.990649 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.990795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.990814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.990840 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:08 crc kubenswrapper[4931]: I0131 04:25:08.990932 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:08Z","lastTransitionTime":"2026-01-31T04:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.093323 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.093381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.093396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.093415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.093427 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.197911 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.197979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.197997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.198021 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.198037 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.300664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.300708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.300734 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.300751 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.300761 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.403184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.403228 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.403239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.403258 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.403271 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.506037 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.506100 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.506118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.506143 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.506163 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.608462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.608498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.608509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.608525 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.608539 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.711370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.711454 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.711470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.711498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.711516 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.814656 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.814747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.814769 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.814798 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.814820 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.901640 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:04:27.160658124 +0000 UTC Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.917743 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.917801 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.917811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.917831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:09 crc kubenswrapper[4931]: I0131 04:25:09.917841 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:09Z","lastTransitionTime":"2026-01-31T04:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.020914 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.020972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.020984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.021005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.021021 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.126860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.126938 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.126963 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.126997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.127022 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.230800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.230863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.230883 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.230919 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.230946 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.312870 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/0.log" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.312918 4931 generic.go:334] "Generic (PLEG): container finished" podID="0be95b57-6df4-4ba6-88e8-acf405e3d6d2" containerID="f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1" exitCode=1 Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.312949 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerDied","Data":"f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.313284 4931 scope.go:117] "RemoveContainer" containerID="f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.324482 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.333000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.333038 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.333049 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.333065 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.333078 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.338202 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.352614 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.366706 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.377328 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.390831 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.409452 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.431162 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.435754 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.435801 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.435812 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.435829 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.435841 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.448184 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.462006 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.472558 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.487373 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.500214 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.511702 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 
04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.531685 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.537530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.537553 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.537561 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.537576 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.537585 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.546239 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.557957 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.572122 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:10Z is after 
2025-08-24T17:21:41Z" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.640147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.640187 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.640198 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.640216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.640227 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.742342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.742362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.742369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.742380 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.742388 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.844975 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.845111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.845176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.845244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.845326 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.896546 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.896555 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.896785 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:10 crc kubenswrapper[4931]: E0131 04:25:10.896803 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.896744 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:10 crc kubenswrapper[4931]: E0131 04:25:10.896941 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:10 crc kubenswrapper[4931]: E0131 04:25:10.897100 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:10 crc kubenswrapper[4931]: E0131 04:25:10.897235 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.902570 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:04:27.085489443 +0000 UTC Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.948277 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.948625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.948645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.948703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:10 crc kubenswrapper[4931]: I0131 04:25:10.948764 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:10Z","lastTransitionTime":"2026-01-31T04:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.052042 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.052408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.052560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.052709 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.052890 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.156180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.156243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.156266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.156300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.156322 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.259503 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.259939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.260182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.260402 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.260582 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.320749 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/0.log" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.320870 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerStarted","Data":"1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.345568 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.360781 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.363173 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.363298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.363401 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.363483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.363559 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.374167 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.385301 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.402349 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.416276 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.441840 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.465796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.465843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.465858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.465878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.465889 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.476325 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.489820 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.499942 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.511115 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.520274 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 
04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.530950 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.541452 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.555358 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.566463 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.568583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.568645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.568660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.568680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.568692 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.577252 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.587805 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.670683 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.670746 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.670759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.670776 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.670793 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.774131 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.774176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.774186 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.774208 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.774218 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.878204 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.878882 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.879163 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.879504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.879676 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.903745 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:03:01.585801583 +0000 UTC Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.909639 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.927085 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.948338 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 
2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.960815 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.981836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.981867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.981879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.981897 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.981909 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:11Z","lastTransitionTime":"2026-01-31T04:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:11 crc kubenswrapper[4931]: I0131 04:25:11.991833 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2
a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.013313 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.025670 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.076675 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.084150 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.084319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.084440 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.084569 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.084703 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.090695 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.103745 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.120959 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.142639 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.159257 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.170929 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.187925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.187988 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.188004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.188024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.188037 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.192484 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.213919 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.247063 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.268864 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.290669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.290745 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.290759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.290782 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.290795 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.393175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.393221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.393232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.393248 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.393259 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.495810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.495861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.495870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.495886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.495896 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.598925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.598971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.598982 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.599000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.599011 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.701864 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.701915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.701927 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.701948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.701960 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.805112 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.805162 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.805176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.805196 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.805210 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.896920 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.896948 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.896966 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:12 crc kubenswrapper[4931]: E0131 04:25:12.897575 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:12 crc kubenswrapper[4931]: E0131 04:25:12.897489 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.896998 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:12 crc kubenswrapper[4931]: E0131 04:25:12.897637 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:12 crc kubenswrapper[4931]: E0131 04:25:12.897802 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.904859 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:21:11.212567149 +0000 UTC Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.907684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.907807 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.907873 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.907942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:12 crc kubenswrapper[4931]: I0131 04:25:12.908021 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:12Z","lastTransitionTime":"2026-01-31T04:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.010921 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.011338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.011442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.011556 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.011659 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.114904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.115283 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.115360 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.115444 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.115520 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.218833 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.219167 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.219254 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.219351 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.219428 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.323158 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.323265 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.323283 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.323312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.323335 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.427134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.427226 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.427250 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.427280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.427303 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.530171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.530225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.530243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.530274 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.530292 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.637835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.637905 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.637927 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.637958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.637988 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.742156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.742218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.742236 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.742300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.742319 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.845161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.845229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.845238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.845252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.845285 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.905037 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:37:45.565912696 +0000 UTC Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.948467 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.948525 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.948546 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.948572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:13 crc kubenswrapper[4931]: I0131 04:25:13.948590 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:13Z","lastTransitionTime":"2026-01-31T04:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.051180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.051214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.051223 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.051237 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.051246 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.153386 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.153415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.153425 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.153438 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.153448 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.256767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.256811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.256823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.256838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.256850 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.360129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.360197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.360211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.360229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.360240 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.463106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.463176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.463201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.463232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.463254 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.566804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.566915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.566935 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.566967 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.567062 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.670240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.670383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.670408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.670440 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.670531 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.774031 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.774091 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.774102 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.774118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.774130 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.877799 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.877859 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.877874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.877891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.877905 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.896277 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.896330 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.896363 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:14 crc kubenswrapper[4931]: E0131 04:25:14.896425 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.896360 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:14 crc kubenswrapper[4931]: E0131 04:25:14.896560 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:14 crc kubenswrapper[4931]: E0131 04:25:14.896655 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:14 crc kubenswrapper[4931]: E0131 04:25:14.896831 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.905264 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:10:23.627264105 +0000 UTC Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.980554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.980618 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.980637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.980662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:14 crc kubenswrapper[4931]: I0131 04:25:14.980679 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:14Z","lastTransitionTime":"2026-01-31T04:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.083175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.083222 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.083234 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.083249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.083262 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.185788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.185871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.185899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.185934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.185961 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.288352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.288408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.288419 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.288439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.288452 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.391403 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.391463 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.391475 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.391498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.391510 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.493798 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.493847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.493861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.493878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.493890 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.596642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.596702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.596713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.596752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.596764 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.699575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.699650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.699670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.699697 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.699718 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.802843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.802895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.802910 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.802944 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.802957 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.904761 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.904796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.904807 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.904819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.904829 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:15Z","lastTransitionTime":"2026-01-31T04:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:15 crc kubenswrapper[4931]: I0131 04:25:15.906041 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:43:35.018892643 +0000 UTC Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.006702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.006756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.006766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.006783 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.006794 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.109107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.109147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.109156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.109175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.109187 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.211399 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.211455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.211468 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.211487 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.211502 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.315580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.315630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.315666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.315683 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.315693 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.420481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.420529 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.420539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.420559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.420572 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.523752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.523806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.523826 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.523855 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.523874 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.626496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.626531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.626542 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.626559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.626571 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.729602 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.729636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.729649 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.729665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.729676 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.831813 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.831847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.831859 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.831875 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.831886 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.896220 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.896315 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.896454 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:16 crc kubenswrapper[4931]: E0131 04:25:16.896608 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:16 crc kubenswrapper[4931]: E0131 04:25:16.896977 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:16 crc kubenswrapper[4931]: E0131 04:25:16.897077 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.897222 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:16 crc kubenswrapper[4931]: E0131 04:25:16.897356 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.906207 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:30:34.456363528 +0000 UTC Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.933937 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.934309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.934340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.934367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:16 crc kubenswrapper[4931]: I0131 04:25:16.934386 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:16Z","lastTransitionTime":"2026-01-31T04:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.037778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.037822 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.037834 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.037850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.037862 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.140906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.140967 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.140984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.141010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.141031 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.244122 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.244218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.244240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.244273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.244297 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.346683 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.346766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.346783 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.346807 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.346824 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.450292 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.450376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.450409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.450439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.450463 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.553954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.554006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.554019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.554036 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.554050 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.656527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.656570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.656578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.656593 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.656604 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.759609 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.759680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.759703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.759765 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.759827 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.862842 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.862886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.862898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.862916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.862929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.906580 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:43:49.155548199 +0000 UTC Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.966284 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.966336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.966347 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.966367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:17 crc kubenswrapper[4931]: I0131 04:25:17.966382 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:17Z","lastTransitionTime":"2026-01-31T04:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.069361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.069411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.069427 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.069448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.069464 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.172406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.172450 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.172462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.172479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.172491 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.274747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.274792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.274804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.274821 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.274833 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.377153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.377198 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.377212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.377232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.377249 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.480669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.480746 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.480759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.480780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.480793 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.583857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.583948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.583976 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.584006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.584028 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.619992 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.620060 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.620080 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.620110 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.620144 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.643894 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:18Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.649443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.649507 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.649530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.649575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.649605 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.668472 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:18Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.674354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.674402 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.674422 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.674444 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.674461 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.700592 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:18Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.705715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.705784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.705796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.705814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.705828 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.723545 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:18Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.729438 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.729497 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.729517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.729544 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.729563 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.749767 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:18Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.750002 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.752387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.752453 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.752471 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.752497 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.752515 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.855867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.855934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.855951 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.855979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.855998 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.896538 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.896589 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.896600 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.896775 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.896844 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.897064 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.897117 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:18 crc kubenswrapper[4931]: E0131 04:25:18.897191 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.906868 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:24:48.257095328 +0000 UTC Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.958907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.958953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.958971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.958996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:18 crc kubenswrapper[4931]: I0131 04:25:18.959018 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:18Z","lastTransitionTime":"2026-01-31T04:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.062083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.062137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.062153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.062272 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.062296 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.165636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.165698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.165756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.165784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.165802 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.268645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.268701 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.268755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.268790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.268811 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.371766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.371842 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.371863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.371891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.371912 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.475170 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.475231 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.475248 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.475274 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.475294 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.579004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.579094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.579126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.579159 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.579182 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.682114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.682190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.682226 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.682309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.682337 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.786055 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.786117 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.786129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.786149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.786160 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.888117 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.888169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.888181 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.888202 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.888216 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.907580 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:09:07.287948651 +0000 UTC Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.991388 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.991448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.991465 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.991489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:19 crc kubenswrapper[4931]: I0131 04:25:19.991508 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:19Z","lastTransitionTime":"2026-01-31T04:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.094357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.094402 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.094414 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.094431 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.094444 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.197149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.197195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.197211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.197232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.197250 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.301597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.301663 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.301684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.301713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.301780 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.404051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.404107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.404124 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.404148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.404165 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.506940 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.507021 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.507045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.507078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.507101 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.610218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.610285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.610302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.610326 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.610344 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.713169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.713411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.713426 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.713444 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.713465 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.815803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.815847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.815860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.815879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.815893 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.897002 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.897058 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.897112 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:20 crc kubenswrapper[4931]: E0131 04:25:20.897214 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.897531 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:20 crc kubenswrapper[4931]: E0131 04:25:20.897601 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:20 crc kubenswrapper[4931]: E0131 04:25:20.897681 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:20 crc kubenswrapper[4931]: E0131 04:25:20.897734 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.899308 4931 scope.go:117] "RemoveContainer" containerID="6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.908502 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:57:38.117596463 +0000 UTC Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.918902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.918953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.918970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.918995 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:20 crc kubenswrapper[4931]: I0131 04:25:20.919012 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:20Z","lastTransitionTime":"2026-01-31T04:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.022645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.023413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.023424 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.023443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.023454 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.126491 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.126555 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.126572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.126598 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.126617 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.229396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.229471 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.229497 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.229530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.229587 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.333201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.333263 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.333282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.333309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.333327 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.437575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.437647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.437669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.437702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.437762 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.541438 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.541508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.541527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.541555 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.541579 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.645343 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.645393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.645404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.645420 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.645434 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.748239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.748295 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.748309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.748331 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.748351 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.850849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.850926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.850948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.850984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.851008 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.909118 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:27:08.4488157 +0000 UTC Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.924144 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.941514 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.954201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.954243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.954257 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.954277 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.954290 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:21Z","lastTransitionTime":"2026-01-31T04:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.954341 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.964193 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.979621 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:21 crc kubenswrapper[4931]: I0131 04:25:21.993779 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.011393 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.057180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.057232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.057249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.057274 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.057293 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.058511 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.075207 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.091817 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.115465 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.130120 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 
04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.142596 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.156020 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.161301 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.161360 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.161374 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.161446 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.161466 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.168480 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.178686 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.189922 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.204366 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.264252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.264338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.264362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.264382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.264395 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.363061 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/2.log" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.366091 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.366542 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.366774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.366839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.366857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.366884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.366903 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.383105 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.398554 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.411828 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.422743 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.433870 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.445523 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.459796 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.469509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.469564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.469582 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.469606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.469623 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.481758 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.495209 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.506962 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.517415 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.530687 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.551629 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.564557 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 
04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.573695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.573737 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.573745 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.573762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.573774 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.595810 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.613965 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.632029 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.648676 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.676714 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.676771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.676780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.676797 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.676809 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.779556 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.779610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.779629 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.779653 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.779669 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.882476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.882533 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.882549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.882570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.882586 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.896069 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.896102 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.896102 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.896170 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:22 crc kubenswrapper[4931]: E0131 04:25:22.896307 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:22 crc kubenswrapper[4931]: E0131 04:25:22.896422 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:22 crc kubenswrapper[4931]: E0131 04:25:22.896676 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:22 crc kubenswrapper[4931]: E0131 04:25:22.896743 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.909428 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:29:41.907929187 +0000 UTC Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.985705 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.985793 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.985811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.985871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:22 crc kubenswrapper[4931]: I0131 04:25:22.986107 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:22Z","lastTransitionTime":"2026-01-31T04:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.089988 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.090090 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.090129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.090161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.090184 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.193415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.193468 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.193483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.193501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.193514 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.296896 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.296952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.296970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.296993 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.297012 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.376652 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/3.log" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.377372 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/2.log" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.380608 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" exitCode=1 Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.380683 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.380790 4931 scope.go:117] "RemoveContainer" containerID="6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.381960 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:25:23 crc kubenswrapper[4931]: E0131 04:25:23.382224 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.396335 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.400217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.400265 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.400277 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.400297 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.400310 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.414348 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.427878 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.444466 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.457699 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.469265 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.482656 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.503050 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.503110 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.503127 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.503150 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.503167 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.505685 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275320a2862a044dad2ab7491fa841df8c92e30bdf08ec6851692d27924d0fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:24:49.918556 6582 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:24:49.918847 6582 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:24:49.918816 6582 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:24:49.918948 6582 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:22Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358628 6974 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358685 6974 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:25:22.358765 6974 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.541887 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.559644 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.572220 4931 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.587518 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.603167 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.605042 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.605081 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc 
kubenswrapper[4931]: I0131 04:25:23.605092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.605111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.605125 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.616668 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:2
4:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.634918 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.648442 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.659256 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.672044 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:23Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.707004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.707064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.707077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.707096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.707108 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.810020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.810095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.810118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.810151 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.810174 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.910224 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:05:54.191388316 +0000 UTC Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.912936 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.912972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.912982 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.912997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:23 crc kubenswrapper[4931]: I0131 04:25:23.913009 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:23Z","lastTransitionTime":"2026-01-31T04:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.016350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.016431 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.016449 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.016476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.016496 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.119788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.119857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.119876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.119905 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.119927 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.223325 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.223403 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.223422 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.223449 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.223468 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.327026 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.327094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.327117 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.327145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.327161 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.386567 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/3.log" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.394027 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.394486 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.418091 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 
04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.432304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.432340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.432357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.432380 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.432398 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.455644 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.477233 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.497877 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.519252 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.535395 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.536541 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.536780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.537011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.537189 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.537668 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.558342 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.582008 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.603285 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.625810 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.629437 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.629404065 +0000 UTC m=+147.438632969 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.629262 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.629659 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.629820 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.629894 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.629879607 +0000 UTC m=+147.439108521 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.630406 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.630704 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.630885 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.630840723 +0000 UTC m=+147.440069687 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.640239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.640291 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.640308 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.640332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.640352 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.643205 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.670359 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.700518 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:22Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358628 6974 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358685 6974 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:25:22.358765 6974 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:25:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.720646 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.731480 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.731572 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731758 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731810 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731809 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731837 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731852 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731873 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for 
pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731936 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.731906285 +0000 UTC m=+147.541135199 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.731968 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.731953986 +0000 UTC m=+147.541182900 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.741031 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.743309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.743374 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.743397 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.743417 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.743429 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.757464 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.778508 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.801859 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.847187 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.847275 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc 
kubenswrapper[4931]: I0131 04:25:24.847293 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.847315 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.847332 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.895955 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.896008 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.895991 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.896033 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.896125 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.896283 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.896334 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:24 crc kubenswrapper[4931]: E0131 04:25:24.896390 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.910577 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:16:48.886163931 +0000 UTC Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.913632 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.950414 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.950452 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.950464 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.950482 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:24 crc kubenswrapper[4931]: I0131 04:25:24.950494 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:24Z","lastTransitionTime":"2026-01-31T04:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.053641 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.053942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.053972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.054038 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.054057 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.156744 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.156804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.156818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.156841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.156858 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.259118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.259178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.259194 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.259218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.259236 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.362000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.362065 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.362083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.362112 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.362134 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.464710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.464804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.464823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.464847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.464865 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.568229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.568286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.568299 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.568319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.568334 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.671527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.671599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.671679 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.671705 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.671882 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.774792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.774868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.774898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.774929 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.774952 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.878523 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.878591 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.878627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.878660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.878682 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.911975 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:02:07.945313659 +0000 UTC Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.982625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.982701 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.982759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.982790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:25 crc kubenswrapper[4931]: I0131 04:25:25.982814 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:25Z","lastTransitionTime":"2026-01-31T04:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.086423 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.086516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.086542 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.086576 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.086605 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.190368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.190437 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.190455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.190484 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.190507 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.293895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.293965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.293987 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.294012 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.294035 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.397620 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.397681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.397700 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.397766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.397787 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.501688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.502012 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.502021 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.502038 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.502055 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.605024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.605080 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.605097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.605124 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.605145 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.707847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.707884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.707896 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.707910 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.707919 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.810863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.810942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.810966 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.810995 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.811013 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.896392 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.896439 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.896418 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:26 crc kubenswrapper[4931]: E0131 04:25:26.896593 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:26 crc kubenswrapper[4931]: E0131 04:25:26.896766 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.896825 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:26 crc kubenswrapper[4931]: E0131 04:25:26.896885 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:26 crc kubenswrapper[4931]: E0131 04:25:26.897027 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.912300 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:17:48.485562672 +0000 UTC Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.914364 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.914413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.914433 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.914455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:26 crc kubenswrapper[4931]: I0131 04:25:26.914474 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:26Z","lastTransitionTime":"2026-01-31T04:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.017908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.017969 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.017985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.018013 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.018030 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.161056 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.161123 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.161146 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.161178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.161205 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.264253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.264341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.264819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.264863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.264881 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.368490 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.368531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.368543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.368562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.368574 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.470780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.470860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.470874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.470896 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.471420 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.574208 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.574255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.574264 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.574279 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.574288 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.676757 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.676819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.676842 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.676876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.676898 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.780356 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.780452 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.780477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.780509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.780528 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.883617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.883678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.883695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.883745 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.883764 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.913157 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:26:23.838926412 +0000 UTC Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.986171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.986211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.986219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.986232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:27 crc kubenswrapper[4931]: I0131 04:25:27.986242 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:27Z","lastTransitionTime":"2026-01-31T04:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.088612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.088647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.088655 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.088669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.088677 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.192125 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.192182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.192194 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.192211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.192223 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.295033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.295063 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.295071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.295086 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.295095 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.398629 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.398661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.398670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.398685 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.398694 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.501052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.501171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.501189 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.501212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.501232 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.604413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.604492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.604511 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.604535 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.604552 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.707674 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.707744 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.707757 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.707776 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.707787 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.811225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.811288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.811310 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.811343 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.811367 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.896075 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.896134 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.896223 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:28 crc kubenswrapper[4931]: E0131 04:25:28.896460 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.896511 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:28 crc kubenswrapper[4931]: E0131 04:25:28.896648 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:28 crc kubenswrapper[4931]: E0131 04:25:28.896903 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:28 crc kubenswrapper[4931]: E0131 04:25:28.897084 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.913414 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:45:13.555468507 +0000 UTC Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.913847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.913895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.913906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.913926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:28 crc kubenswrapper[4931]: I0131 04:25:28.913938 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:28Z","lastTransitionTime":"2026-01-31T04:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.017575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.017632 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.017646 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.017670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.017686 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.120516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.120584 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.120603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.120633 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.120653 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.122530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.122581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.122599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.122618 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.122634 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: E0131 04:25:29.138328 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.143518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.143566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.143585 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.143606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.143623 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: E0131 04:25:29.158472 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.163991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.164058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.164079 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.164104 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.164124 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: E0131 04:25:29.181500 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.185917 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.185993 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.186013 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.186046 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.186069 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: E0131 04:25:29.202117 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.207197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.207262 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.207282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.207310 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.207335 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: E0131 04:25:29.227333 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:29Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:29 crc kubenswrapper[4931]: E0131 04:25:29.227605 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.229899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.229985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.230008 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.230041 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.230064 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.333850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.333921 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.333938 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.333965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.333983 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.437075 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.437107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.437120 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.437137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.437149 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.540221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.540268 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.540284 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.540307 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.540323 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.643489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.643570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.643591 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.643615 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.643637 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.746147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.746204 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.746222 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.746252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.746272 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.849825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.849884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.849903 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.849932 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.849950 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.914080 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:26:36.041859638 +0000 UTC Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.952133 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.952203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.952218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.952241 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:29 crc kubenswrapper[4931]: I0131 04:25:29.952254 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:29Z","lastTransitionTime":"2026-01-31T04:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.055934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.055997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.056016 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.056037 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.056055 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.160062 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.160137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.160165 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.160195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.160220 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.264381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.264444 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.264458 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.264479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.264495 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.367812 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.367870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.367884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.367904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.367916 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.470618 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.470675 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.470684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.470700 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.470714 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.573247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.573301 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.573315 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.573336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.573353 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.675784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.675863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.675881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.675906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.675925 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.779242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.779290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.779304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.779322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.779336 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.882854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.882924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.882955 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.882998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.883024 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.896618 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.896657 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.896687 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.896617 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:30 crc kubenswrapper[4931]: E0131 04:25:30.896853 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:30 crc kubenswrapper[4931]: E0131 04:25:30.896947 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:30 crc kubenswrapper[4931]: E0131 04:25:30.897092 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:30 crc kubenswrapper[4931]: E0131 04:25:30.897329 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.914793 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:02:45.323670314 +0000 UTC Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.986298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.986354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.986374 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.986398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:30 crc kubenswrapper[4931]: I0131 04:25:30.986415 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:30Z","lastTransitionTime":"2026-01-31T04:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.089434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.089496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.089519 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.089553 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.089581 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.193285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.193430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.193499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.193530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.193548 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.296769 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.296841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.296854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.296873 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.296884 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.399665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.399713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.399749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.399773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.399790 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.502688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.502765 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.502784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.502805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.502817 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.605022 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.605092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.605104 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.605121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.605136 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.707274 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.707335 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.707351 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.707375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.707393 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.810813 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.810876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.810894 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.810924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.810944 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.914129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.914197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.914218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.914255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.914279 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:31Z","lastTransitionTime":"2026-01-31T04:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.915058 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:41:50.632121074 +0000 UTC Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.920455 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:31Z is after 
2025-08-24T17:21:41Z" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.940948 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.955432 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.976884 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:31 crc kubenswrapper[4931]: I0131 04:25:31.998185 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.019190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.019244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc 
kubenswrapper[4931]: I0131 04:25:32.019261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.019287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.019306 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.033433 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a0
36bed2c652493b5534883803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:22Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358628 6974 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358685 6974 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:25:22.358765 6974 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:25:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.065434 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0
ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.076571 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1405be0-3341-4e8d-b042-8ad942973923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970aaff4cf619b86c9fc878350e984b2671d6ae9a5cd42f2a0e54d6b291183c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0544d5c0e24ffca37b0da8653c6c188d545e025d20d1190ff7f43fc572f773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0544d5c0e24ffca37b0da8653c6c188d545e025d20d1190ff7f43fc572f773\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.096564 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.115921 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.122217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.122270 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.122290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.122314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 
04:25:32.122333 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.131196 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.145659 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 
04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.210586 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.225328 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.225370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.225384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.225403 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.225416 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.232479 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.252479 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.271176 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.291341 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.310126 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.329361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.329420 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.329439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.329465 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.329483 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.334037 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.431756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.431835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.431856 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.431893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.431914 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.534400 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.534701 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.534752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.534771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.534783 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.637773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.637839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.637862 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.637891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.637913 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.741401 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.741475 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.741499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.741527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.741549 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.844988 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.845064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.845088 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.845114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.845130 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.896716 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.896831 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.897180 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.897411 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:32 crc kubenswrapper[4931]: E0131 04:25:32.897408 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:32 crc kubenswrapper[4931]: E0131 04:25:32.897515 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:32 crc kubenswrapper[4931]: E0131 04:25:32.897621 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:32 crc kubenswrapper[4931]: E0131 04:25:32.897706 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.915914 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:26:40.687515075 +0000 UTC Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.948923 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.949170 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.949395 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.949605 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:32 crc kubenswrapper[4931]: I0131 04:25:32.949831 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:32Z","lastTransitionTime":"2026-01-31T04:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.053751 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.053805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.053817 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.053836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.053848 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.157074 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.157163 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.157187 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.157217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.157239 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.260617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.260688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.260714 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.260799 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.260825 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.364556 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.365051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.365425 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.365606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.365838 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.468985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.469059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.469084 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.469107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.469123 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.571953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.572821 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.572835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.572849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.572858 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.676773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.676825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.676843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.676866 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.676885 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.780010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.780111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.780127 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.780154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.780172 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.887490 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.887677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.887713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.887800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.887819 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.916385 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:57:51.133077854 +0000 UTC Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.991799 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.992063 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.992188 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.992276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:33 crc kubenswrapper[4931]: I0131 04:25:33.992354 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:33Z","lastTransitionTime":"2026-01-31T04:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.095336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.095395 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.095411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.095434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.095449 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.198943 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.198996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.199012 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.199034 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.199051 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.302901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.302984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.303009 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.303042 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.303064 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.407341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.407398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.407411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.407435 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.407451 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.511232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.511293 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.511314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.511343 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.511362 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.614496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.614550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.614568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.614593 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.614623 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.717919 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.717995 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.718017 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.718040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.718056 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.821713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.821913 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.821935 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.821960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.821978 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.896088 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.896132 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.896178 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.896363 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:34 crc kubenswrapper[4931]: E0131 04:25:34.896535 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:34 crc kubenswrapper[4931]: E0131 04:25:34.896646 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:34 crc kubenswrapper[4931]: E0131 04:25:34.896787 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:34 crc kubenswrapper[4931]: E0131 04:25:34.897497 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.898160 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:25:34 crc kubenswrapper[4931]: E0131 04:25:34.898460 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.916747 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:52:16.138894922 +0000 UTC Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.925423 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.925478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.925495 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.925522 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:34 crc kubenswrapper[4931]: I0131 04:25:34.925544 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:34Z","lastTransitionTime":"2026-01-31T04:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.028627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.028682 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.028699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.028758 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.028784 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.131844 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.131889 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.131901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.131919 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.131933 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.235577 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.235631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.235644 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.235662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.235674 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.339652 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.339713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.339782 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.339824 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.339847 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.442367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.442439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.442459 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.442486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.442505 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.545804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.545874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.545892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.545923 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.545941 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.649762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.649838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.649858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.649884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.649903 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.752844 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.752893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.752904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.752924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.752936 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.855662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.856205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.856357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.856498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.856630 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.917887 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:11:05.032907344 +0000 UTC Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.959633 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.959691 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.959713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.959772 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:35 crc kubenswrapper[4931]: I0131 04:25:35.959795 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:35Z","lastTransitionTime":"2026-01-31T04:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.063153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.063201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.063219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.063242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.063464 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.166373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.166430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.166442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.166458 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.166469 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.269566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.269612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.269623 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.269640 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.269651 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.372790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.372864 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.372882 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.372906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.372929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.476639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.476751 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.476778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.476810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.476839 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.580290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.580359 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.580378 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.580405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.580422 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.683931 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.684082 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.684107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.684139 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.684163 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.786930 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.786984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.787001 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.787024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.787041 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.890317 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.890376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.890399 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.890430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.890451 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.895955 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.896047 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.895967 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.896153 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:36 crc kubenswrapper[4931]: E0131 04:25:36.896141 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:36 crc kubenswrapper[4931]: E0131 04:25:36.896426 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:36 crc kubenswrapper[4931]: E0131 04:25:36.896509 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:36 crc kubenswrapper[4931]: E0131 04:25:36.896614 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.919058 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:37:21.476558014 +0000 UTC Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.993578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.993702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.993801 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.993896 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:36 crc kubenswrapper[4931]: I0131 04:25:36.993923 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:36Z","lastTransitionTime":"2026-01-31T04:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.097475 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.097538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.097555 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.097579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.098460 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.201137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.201207 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.201230 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.201260 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.201284 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.303899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.303961 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.303979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.304003 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.304022 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.407929 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.407987 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.408005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.408031 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.408054 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.511672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.511781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.511806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.511835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.511856 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.615603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.615651 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.615668 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.615690 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.615708 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.721749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.721814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.721837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.721862 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.721880 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.824572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.824642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.824659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.824687 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.824707 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.919628 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:40:22.698509002 +0000 UTC Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.928383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.928475 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.928502 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.928537 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:37 crc kubenswrapper[4931]: I0131 04:25:37.928561 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:37Z","lastTransitionTime":"2026-01-31T04:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.031515 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.031595 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.031616 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.031643 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.031660 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.136476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.136548 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.136570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.136600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.136622 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.239890 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.239944 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.239956 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.239975 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.239987 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.342106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.342150 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.342160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.342181 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.342192 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.444908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.444967 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.444984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.445009 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.445030 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.548038 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.548114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.548147 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.548176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.548199 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.651767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.651849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.651871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.651905 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.651928 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.754615 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.754662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.754671 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.754688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.754700 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.857054 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.857094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.857105 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.857122 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.857133 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.895713 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.895803 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.895804 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.895814 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:38 crc kubenswrapper[4931]: E0131 04:25:38.895879 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:38 crc kubenswrapper[4931]: E0131 04:25:38.895993 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:38 crc kubenswrapper[4931]: E0131 04:25:38.896097 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:38 crc kubenswrapper[4931]: E0131 04:25:38.896154 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.920745 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:44:46.616533894 +0000 UTC Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.959792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.959836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.959845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.959861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:38 crc kubenswrapper[4931]: I0131 04:25:38.959869 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:38Z","lastTransitionTime":"2026-01-31T04:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.063004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.063041 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.063051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.063068 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.063080 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.166107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.166195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.166218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.166247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.166269 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.269224 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.269261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.269270 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.269285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.269295 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.371893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.371954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.371972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.371998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.372017 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.474564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.475178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.475344 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.475604 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.475880 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.504555 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.504588 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.504599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.504612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.504622 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.533646 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.539273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.539313 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.539324 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.539339 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.539350 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.557453 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.561664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.561736 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.561747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.561766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.561778 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.580936 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.594959 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.595016 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.595027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.595065 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.595078 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.614828 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.620371 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.620448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.620467 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.620496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.620516 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.642297 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.642505 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.644172 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.644216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.644232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.644249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.644261 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.747371 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.747414 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.747425 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.747442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.747453 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.850240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.850298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.850315 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.850340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.850357 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.921064 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:07:12.799123442 +0000 UTC Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.953404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.953481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.953504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.953535 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.953557 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:39Z","lastTransitionTime":"2026-01-31T04:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:39 crc kubenswrapper[4931]: I0131 04:25:39.995568 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.995768 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:25:39 crc kubenswrapper[4931]: E0131 04:25:39.995875 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs podName:df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342 nodeName:}" failed. No retries permitted until 2026-01-31 04:26:43.995848169 +0000 UTC m=+162.805077073 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs") pod "network-metrics-daemon-4cc6z" (UID: "df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.057140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.057190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.057202 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.057219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.057232 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.160231 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.160282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.160296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.160317 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.160329 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.263041 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.263116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.263135 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.263161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.263179 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.366189 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.366265 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.366337 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.366367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.366388 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.468370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.468417 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.468434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.468455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.468471 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.572211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.572269 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.572287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.572312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.572334 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.675291 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.675345 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.675356 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.675373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.675384 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.778263 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.778328 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.778345 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.778369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.778386 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.881670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.881715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.881738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.881752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.881764 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.896329 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.896433 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:40 crc kubenswrapper[4931]: E0131 04:25:40.896500 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.896328 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.896433 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:40 crc kubenswrapper[4931]: E0131 04:25:40.896653 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:40 crc kubenswrapper[4931]: E0131 04:25:40.896881 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:40 crc kubenswrapper[4931]: E0131 04:25:40.897037 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.921711 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:49:39.771150803 +0000 UTC Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.984357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.984399 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.984413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.984432 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:40 crc kubenswrapper[4931]: I0131 04:25:40.984447 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:40Z","lastTransitionTime":"2026-01-31T04:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.087658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.087743 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.087761 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.087785 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.087802 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.190573 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.190629 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.190649 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.190673 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.190692 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.293136 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.293242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.293267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.293298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.293321 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.396494 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.396533 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.396543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.396557 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.396565 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.499475 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.499537 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.499554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.499583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.499601 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.602528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.602593 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.602612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.602638 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.602656 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.706359 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.706410 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.706421 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.706439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.706452 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.809092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.809192 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.809219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.809252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.809279 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.912207 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.912245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.912256 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.912274 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.912285 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:41Z","lastTransitionTime":"2026-01-31T04:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.913114 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9trw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4cc6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.922481 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:42:21.990148305 +0000 UTC Jan 31 
04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.936194 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.955653 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.974099 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:41 crc kubenswrapper[4931]: I0131 04:25:41.990963 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7d60e8b-e113-470f-93ff-a8a795074642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03476c06d802eb093a7f0f6563c58526efd035d8552724e0d0a1277a8a227afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nqpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.006675 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p6fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe320e12-71d8-45f5-8634-ee326cbdb4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7879d657238a7b18229968d8dad6501fae9d0283c09e9b254336b653f66f7924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2z8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p6fj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.015253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.015394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.015413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.015433 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.015448 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.029702 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638aa0b1-4b39-4fe0-b136-0a9b2a53edde\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 04:24:20.841645 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 04:24:20.842644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:24:20.844373 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1307732340/tls.crt::/tmp/serving-cert-1307732340/tls.key\\\\\\\"\\\\nI0131 04:24:21.227854 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 04:24:21.230459 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 04:24:21.230474 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 04:24:21.230494 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 04:24:21.230498 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 04:24:21.240218 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 04:24:21.240253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 04:24:21.240259 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0131 04:24:21.240263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 04:24:21.240267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 04:24:21.240270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 04:24:21.240273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 04:24:21.240304 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 04:24:21.247056 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.062764 4931 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f2e5660-13d8-4896-bad5-008e165ba847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:22Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358628 6974 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 04:25:22.358685 6974 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:25:22.358765 6974 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:25:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8bpr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78mxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.080527 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc47cf130b10f6c5e8e7199003caceeaf757a60f25d28566c60533e87e2cbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c920245e440f006b4d50e4d7589451617e622bb989a5e44f5a30534c73ba3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.098926 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ea9d774961a34d92b2603f0ad7651c7bb0e2dddc7abfd00b109a6b0e860f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: 
I0131 04:25:42.115715 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-79gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e593e10-e9d9-4d9c-82e3-4d51ce5e2f39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64b18a48f82bd505dc2831d109902dfc4297b4e981104e6451567c019899185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5dc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-79gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.117689 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.117812 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.117837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.117867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.117888 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.137571 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r5kkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be95b57-6df4-4ba6-88e8-acf405e3d6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:25:09Z\\\",\\\"message\\\":\\\"2026-01-31T04:24:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5\\\\n2026-01-31T04:24:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_968549fe-5abb-41b1-8997-c1971acb18e5 to /host/opt/cni/bin/\\\\n2026-01-31T04:24:24Z [verbose] multus-daemon started\\\\n2026-01-31T04:24:24Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:25:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:25:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r5kkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.160304 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52fq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3448d78c-9a3a-4729-b656-3f3dad829af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d09148c82440ea350107f828312ad2237d59e5bc984a2a264285ad37711b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11dedd7c0c2126f9502464ebde45422b2350767a355b4d20a95e9192e0a2a603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e320bb002d24a10f8e5be412c9f6867f332ab5c2adaf26c65d6cf1d5ff985c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160220340b241ffc9005f36a4c139a0dcd82cbd627dcbf12f81621679e6c1a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22fb052cb5f4e35e0ed402e456028548729977d8266152eab941ea81b693f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6595b6b032fc7369dadb1b54b2608a38f912e8435190b986768e294bc3bc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ff52292b947a4d97cd8e17047b7d8be8e467bf873a9b6e4ecff2b87feb933ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sz9hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52fq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.177306 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4804e45c-fc53-4c00-973a-fbc401ea2990\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a0935a8b910c7eaea523a6db2a5fff415f903f633fe846c78585ea6f48ae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907715368f5ec76863ffe2dc892d183b0920acf788490f24ec5649b0bcf935d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9m5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dkx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 
04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.208691 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0cabbdd-9da8-40b6-bc15-cb331f31a12a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e18f1ee073b76c1c0b296819b942a39a00c46500f5aae9f40654ad3a1d377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d03d53454df9466c0989a0ee1541d22cd210a87b783f2a75e4f54fa5a12c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e8d3f1cb85b0c117428d36ea25fef523916ebcc5f09e852c099405b815a16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad707e9b94ede1da1e7385945ae947366e290051c3ec64147bd6f0b67669d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4680be5cf85140cd4c7813354c81d47ea1521862915d40df6021e396ed86def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://758ddb96c83fbe8412da1900bb6682a69c518803c2f32855e163cfe943fe955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f2e8324f0b343161ea335ba858730f70c318531796c6d607bb1d140a1a112\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f07a97d3489de8b0ff85d0f1906c6916ee48983927f4db7a3b30c453c788ba40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.220671 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.220739 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.220755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.220780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.220797 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.226666 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1405be0-3341-4e8d-b042-8ad942973923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970aaff4cf619b86c9fc878350e984b2671d6ae9a5cd42f2a0e54d6b291183c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0544d5c0e24ffca37b0da8653c6c188d545e025d20d1190ff7f43fc572f773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0544d5c0e24ffca37b0da8653c6c188d545e025d20d1190ff7f43fc572f773\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.246673 4931 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb54477-771a-4834-af39-a025d7348935\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3805eaf1687a1e14edc4ec1ea9930a3d70578455ced357ca054bb12f9352a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2306eaf9aef601090411360e97eb903ab649ffbbba6217554e18cab7e3ff062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://75477e415bc976de2a165df4a89fa95273776e4015445c07d3fc1d7cce285d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.365661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.365775 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.365786 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.365902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.365924 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.374709 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44dc50bb-7579-49a7-8148-bcb06d49bf28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2fe974e33612e6bc32dc67b3b9adcecf9e1a0ab0a15c9e5e5b3703fddcbcded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af632ddd0875b64a76cff4b96b82644288e3e8223a8b094146b7d6e60164cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb49b7a16d7b12447f694bf4dbb7af2da8edb344fdfdffe048bd8ec9129535c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4aefd1e4fce07b152f398fc74f14f99f43b3f94b39fa2e5a42ddc7bd6757ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:24:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:24:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:24:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.386340 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:24:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b774308884d435cc3b372164bd52501972375427218a855457f2ffda4e5e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:24:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.467586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.467617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.467626 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.467638 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.467648 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.570160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.570232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.570246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.570261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.570271 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.672359 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.672408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.672417 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.672430 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.672441 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.775585 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.775660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.775682 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.775710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.775772 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.878343 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.878406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.878423 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.878450 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.878472 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.895909 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.896019 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.895952 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.895958 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:42 crc kubenswrapper[4931]: E0131 04:25:42.896228 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:42 crc kubenswrapper[4931]: E0131 04:25:42.896770 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:42 crc kubenswrapper[4931]: E0131 04:25:42.897026 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:42 crc kubenswrapper[4931]: E0131 04:25:42.897071 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.922835 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:03:50.888632887 +0000 UTC Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.981371 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.981424 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.981440 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.981462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:42 crc kubenswrapper[4931]: I0131 04:25:42.981481 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:42Z","lastTransitionTime":"2026-01-31T04:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.084170 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.084240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.084266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.084293 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.084313 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.189424 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.189485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.189505 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.189550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.189569 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.292448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.292490 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.292499 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.292513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.292522 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.395285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.395354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.395370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.395402 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.395421 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.498964 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.499043 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.499063 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.499091 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.499111 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.602478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.602530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.602543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.602561 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.602570 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.706006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.706060 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.706076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.706097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.706112 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.809707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.809830 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.809848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.809877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.809900 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.912764 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.912800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.912810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.912824 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.912833 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:43Z","lastTransitionTime":"2026-01-31T04:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:43 crc kubenswrapper[4931]: I0131 04:25:43.923153 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:59:03.852782484 +0000 UTC Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.015581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.015664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.015686 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.015718 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.015778 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.118581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.118632 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.118645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.118665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.118680 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.220861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.220934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.220957 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.220990 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.221018 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.323113 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.323159 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.323171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.323188 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.323201 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.425909 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.425973 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.425989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.426012 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.426028 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.528265 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.528318 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.528337 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.528362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.528379 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.631839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.631907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.631924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.631948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.631967 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.741506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.741884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.741906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.741936 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.742862 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.846160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.846213 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.846230 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.846253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.846274 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.896042 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.896100 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.896100 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:44 crc kubenswrapper[4931]: E0131 04:25:44.896256 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.896340 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:44 crc kubenswrapper[4931]: E0131 04:25:44.896500 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:44 crc kubenswrapper[4931]: E0131 04:25:44.896553 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:44 crc kubenswrapper[4931]: E0131 04:25:44.896681 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.923832 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:29:44.302362334 +0000 UTC Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.949717 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.949818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.949845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.949873 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:44 crc kubenswrapper[4931]: I0131 04:25:44.949895 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:44Z","lastTransitionTime":"2026-01-31T04:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.052965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.053416 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.053433 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.053457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.053474 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.156832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.156911 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.156930 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.156954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.156973 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.259703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.259801 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.259824 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.259852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.259875 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.362830 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.362907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.362931 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.362958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.362980 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.466154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.466214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.466231 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.466253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.466274 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.570500 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.570570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.570593 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.570617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.570635 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.673647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.673709 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.673771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.673795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.673813 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.776964 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.777032 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.777050 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.777077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.777096 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.880625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.880716 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.880781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.880814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.880840 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.924790 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:14:18.87141766 +0000 UTC Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.984382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.984435 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.984453 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.984479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:45 crc kubenswrapper[4931]: I0131 04:25:45.984503 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:45Z","lastTransitionTime":"2026-01-31T04:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.087851 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.087912 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.087933 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.087960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.087983 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.191909 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.191979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.191996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.192019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.192037 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.294796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.294841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.294858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.294879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.294897 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.397457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.397498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.397509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.397525 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.397536 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.500179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.500228 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.500242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.500261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.500277 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.602451 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.602480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.602488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.602500 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.602510 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.705002 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.705052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.705071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.705094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.705113 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.808099 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.808171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.808184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.808201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.808213 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.896612 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.896671 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.896742 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.896637 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:46 crc kubenswrapper[4931]: E0131 04:25:46.896843 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:46 crc kubenswrapper[4931]: E0131 04:25:46.896941 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:46 crc kubenswrapper[4931]: E0131 04:25:46.897029 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:46 crc kubenswrapper[4931]: E0131 04:25:46.897153 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.911650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.911703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.911755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.911781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.911800 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:46Z","lastTransitionTime":"2026-01-31T04:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:46 crc kubenswrapper[4931]: I0131 04:25:46.925920 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:58:18.177282414 +0000 UTC Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.014710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.014787 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.014805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.014826 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.014846 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.120050 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.120154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.120225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.120258 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.120281 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.223075 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.223152 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.223177 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.223208 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.223234 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.325524 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.325574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.325588 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.325608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.325624 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.428253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.428308 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.428322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.428338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.428350 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.530845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.530891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.530902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.530918 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.530929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.633645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.633675 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.633685 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.633697 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.633706 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.737549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.737590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.737601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.737622 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.737632 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.840633 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.840712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.840769 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.840802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.840825 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.926098 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:07:59.980642688 +0000 UTC Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.943493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.943539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.943550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.943570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:47 crc kubenswrapper[4931]: I0131 04:25:47.943585 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:47Z","lastTransitionTime":"2026-01-31T04:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.046978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.047048 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.047129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.047153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.047169 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.150131 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.150171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.150182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.150197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.150208 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.252814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.252879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.252893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.252916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.252929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.355654 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.355695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.355703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.355750 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.355761 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.458775 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.458845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.458862 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.458886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.458905 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.561566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.561608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.561616 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.561634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.561644 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.664239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.664282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.664294 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.664309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.664322 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.766207 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.766257 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.766378 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.766404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.766421 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.868823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.868893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.868902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.868915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.868923 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.896217 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.896303 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:48 crc kubenswrapper[4931]: E0131 04:25:48.896355 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:48 crc kubenswrapper[4931]: E0131 04:25:48.896425 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.896304 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:48 crc kubenswrapper[4931]: E0131 04:25:48.896761 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.897067 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:48 crc kubenswrapper[4931]: E0131 04:25:48.897268 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.926784 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:15:02.821145834 +0000 UTC Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.971114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.971165 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.971181 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.971206 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:48 crc kubenswrapper[4931]: I0131 04:25:48.971224 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:48Z","lastTransitionTime":"2026-01-31T04:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.073440 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.073505 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.073527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.073558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.073577 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.175387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.175434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.175445 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.175463 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.175478 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.277568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.277631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.277653 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.277677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.277693 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.379870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.379914 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.379926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.379940 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.379949 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.482260 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.482305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.482318 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.482335 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.482345 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.585199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.585288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.585316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.585353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.585376 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.688574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.688626 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.688642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.688666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.688684 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.695916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.695989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.696008 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.696034 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.696052 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: E0131 04:25:49.718473 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.723004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.723061 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.723077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.723098 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.723116 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: E0131 04:25:49.742106 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.746870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.746935 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.746957 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.746985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.747006 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: E0131 04:25:49.768512 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.774145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.774249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.774267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.774293 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.774317 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: E0131 04:25:49.794752 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.800251 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.800365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.800390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.800419 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.800441 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: E0131 04:25:49.825647 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:25:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d62aa0b2-fc7e-4980-9739-9ae59578d075\\\",\\\"systemUUID\\\":\\\"e984073f-fa07-4ec7-ab9e-f3b72b6e8f33\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:25:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:25:49 crc kubenswrapper[4931]: E0131 04:25:49.825920 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.828383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.828447 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.828463 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.828489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.828506 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.897658 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:25:49 crc kubenswrapper[4931]: E0131 04:25:49.898084 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78mxr_openshift-ovn-kubernetes(5f2e5660-13d8-4896-bad5-008e165ba847)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.926993 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:56:22.875991619 +0000 UTC Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.931433 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.931503 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.931526 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.931556 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:49 crc kubenswrapper[4931]: I0131 04:25:49.931573 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:49Z","lastTransitionTime":"2026-01-31T04:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.035340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.035404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.035420 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.035443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.035459 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.137698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.137797 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.137821 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.137848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.137870 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.240424 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.240515 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.240608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.240634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.240651 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.343110 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.343168 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.343188 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.343212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.343228 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.446354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.446459 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.446477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.446500 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.446515 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.549812 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.549844 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.549854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.549870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.549880 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.653313 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.653358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.653370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.653386 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.653397 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.756848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.756904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.756926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.757062 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.757156 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.860052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.860105 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.860121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.860143 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.860163 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.895754 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.895858 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.895869 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.896008 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:50 crc kubenswrapper[4931]: E0131 04:25:50.895998 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:50 crc kubenswrapper[4931]: E0131 04:25:50.896139 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:50 crc kubenswrapper[4931]: E0131 04:25:50.896297 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:50 crc kubenswrapper[4931]: E0131 04:25:50.896433 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.928130 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:11:35.656391893 +0000 UTC Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.962974 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.963053 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.963081 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.963106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:50 crc kubenswrapper[4931]: I0131 04:25:50.963123 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:50Z","lastTransitionTime":"2026-01-31T04:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.067002 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.067070 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.067087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.067114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.067135 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.170562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.170866 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.170890 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.170921 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.170944 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.274416 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.274482 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.274498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.274524 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.274541 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.377483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.377538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.377557 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.377580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.377600 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.480941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.481010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.481027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.481058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.481078 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.584560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.584624 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.584645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.584675 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.584699 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.687772 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.687831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.687851 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.687877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.687896 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.791064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.791197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.791219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.791243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.791261 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.895442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.895543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.895572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.895610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.895642 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:51Z","lastTransitionTime":"2026-01-31T04:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.928340 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:14:37.942299154 +0000 UTC Jan 31 04:25:51 crc kubenswrapper[4931]: I0131 04:25:51.944927 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.944886404 podStartE2EDuration="1m29.944886404s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:51.942495671 +0000 UTC m=+110.751724595" watchObservedRunningTime="2026-01-31 04:25:51.944886404 +0000 UTC m=+110.754115318" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.000501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.000563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.000580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.000601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.000621 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.011977 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-79gv8" podStartSLOduration=92.011938237 podStartE2EDuration="1m32.011938237s" podCreationTimestamp="2026-01-31 04:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:51.986770592 +0000 UTC m=+110.795999496" watchObservedRunningTime="2026-01-31 04:25:52.011938237 +0000 UTC m=+110.821167141" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.051025 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-r5kkh" podStartSLOduration=90.05099777 podStartE2EDuration="1m30.05099777s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.011853235 +0000 UTC m=+110.821082159" watchObservedRunningTime="2026-01-31 04:25:52.05099777 +0000 UTC m=+110.860226684" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.054956 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-52fq9" podStartSLOduration=90.054923934 podStartE2EDuration="1m30.054923934s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.051152474 +0000 UTC m=+110.860381378" watchObservedRunningTime="2026-01-31 04:25:52.054923934 +0000 UTC m=+110.864153078" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.111068 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.111341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.111436 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.111549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.111635 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.139968 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.139943282 podStartE2EDuration="28.139943282s" podCreationTimestamp="2026-01-31 04:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.116895753 +0000 UTC m=+110.926124647" watchObservedRunningTime="2026-01-31 04:25:52.139943282 +0000 UTC m=+110.949172186" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.140581 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.140573939 podStartE2EDuration="1m25.140573939s" podCreationTimestamp="2026-01-31 04:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.139553242 +0000 UTC m=+110.948782126" watchObservedRunningTime="2026-01-31 04:25:52.140573939 +0000 UTC m=+110.949802833" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.156523 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.15650229 podStartE2EDuration="1m3.15650229s" podCreationTimestamp="2026-01-31 04:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.155883184 +0000 UTC m=+110.965112068" watchObservedRunningTime="2026-01-31 04:25:52.15650229 +0000 UTC m=+110.965731174" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.184632 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dkx7" podStartSLOduration=90.184609003 podStartE2EDuration="1m30.184609003s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.184415878 +0000 UTC m=+110.993644762" watchObservedRunningTime="2026-01-31 04:25:52.184609003 +0000 UTC m=+110.993837897" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.213846 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.214165 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.214320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.214511 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.214581 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.231108 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=90.231086482 podStartE2EDuration="1m30.231086482s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.212232174 +0000 UTC m=+111.021461058" watchObservedRunningTime="2026-01-31 04:25:52.231086482 +0000 UTC m=+111.040315366" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.261049 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podStartSLOduration=90.261031114 podStartE2EDuration="1m30.261031114s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.259629297 +0000 UTC m=+111.068858181" watchObservedRunningTime="2026-01-31 04:25:52.261031114 +0000 UTC m=+111.070259998" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.282546 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8p6fj" podStartSLOduration=91.282524382 podStartE2EDuration="1m31.282524382s" podCreationTimestamp="2026-01-31 04:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:25:52.272390124 +0000 UTC m=+111.081619008" watchObservedRunningTime="2026-01-31 04:25:52.282524382 +0000 UTC m=+111.091753276" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.316903 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.316955 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.316983 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.317000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.317012 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.419307 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.419338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.419346 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.419358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.419366 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.521515 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.521601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.521619 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.521652 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.521674 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.625029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.625520 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.625601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.625690 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.625803 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.728289 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.728488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.728576 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.728667 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.728774 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.832562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.832616 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.832632 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.832656 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.832674 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.895806 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.895913 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.895970 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:52 crc kubenswrapper[4931]: E0131 04:25:52.895976 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:52 crc kubenswrapper[4931]: E0131 04:25:52.896083 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.895923 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:52 crc kubenswrapper[4931]: E0131 04:25:52.896219 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:52 crc kubenswrapper[4931]: E0131 04:25:52.896362 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.929389 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:22:13.659156858 +0000 UTC Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.935595 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.935639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.935647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.935663 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:52 crc kubenswrapper[4931]: I0131 04:25:52.935672 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:52Z","lastTransitionTime":"2026-01-31T04:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.039886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.039932 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.039950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.039974 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.039991 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.143502 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.143550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.143566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.143589 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.143607 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.247377 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.247457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.247484 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.247515 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.247537 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.350665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.350771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.350790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.350822 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.350845 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.453635 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.453674 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.453682 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.453694 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.453704 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.556000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.556045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.556057 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.556071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.556079 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.658630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.658752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.658769 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.658789 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.658804 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.762020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.762106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.762134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.762168 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.762195 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.865461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.865496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.865504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.865517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.865525 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.930012 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:12:33.169491358 +0000 UTC Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.968828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.968887 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.968897 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.968917 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:53 crc kubenswrapper[4931]: I0131 04:25:53.968930 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:53Z","lastTransitionTime":"2026-01-31T04:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.072443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.072484 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.072494 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.072508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.072518 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.175229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.175280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.175324 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.175349 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.175369 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.278878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.278938 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.278956 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.278980 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.279006 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.382119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.382276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.382304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.382335 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.382362 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.485169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.485250 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.485270 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.485301 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.485324 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.587967 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.588052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.588071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.588098 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.588117 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.691137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.691175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.691188 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.691205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.691216 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.793762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.793840 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.793856 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.793881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.793898 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.895711 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.895866 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.895931 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.896028 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.895998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: E0131 04:25:54.895927 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:54 crc kubenswrapper[4931]: E0131 04:25:54.896164 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.896138 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: E0131 04:25:54.896219 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:54 crc kubenswrapper[4931]: E0131 04:25:54.896513 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.896525 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.896579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.896606 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.930262 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:41:25.376747744 +0000 UTC Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.999477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.999517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.999528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.999545 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:54 crc kubenswrapper[4931]: I0131 04:25:54.999559 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:54Z","lastTransitionTime":"2026-01-31T04:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.103119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.103185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.103203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.103233 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.103265 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.206648 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.206709 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.206742 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.206767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.206783 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.309834 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.309933 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.309956 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.309989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.310039 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.412677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.412946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.413029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.413093 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.413157 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.515511 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.516342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.516503 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.516678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.516838 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.620552 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.620664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.620681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.620710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.620759 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.723303 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.723934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.723965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.723988 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.723999 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.826582 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.826650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.826667 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.826691 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.826710 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.930585 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:05:27.270638086 +0000 UTC Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.930736 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.930774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.930782 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.930800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:55 crc kubenswrapper[4931]: I0131 04:25:55.930811 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:55Z","lastTransitionTime":"2026-01-31T04:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.033622 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.033696 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.033710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.033769 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.033782 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.136638 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.136671 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.136679 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.136693 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.136701 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.238480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.238518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.238528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.238541 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.238550 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.341643 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.341708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.341731 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.341749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.341758 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.444601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.444643 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.444653 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.444668 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.444679 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.512891 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/1.log" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.513868 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/0.log" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.513980 4931 generic.go:334] "Generic (PLEG): container finished" podID="0be95b57-6df4-4ba6-88e8-acf405e3d6d2" containerID="1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1" exitCode=1 Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.514051 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerDied","Data":"1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.514130 4931 scope.go:117] "RemoveContainer" containerID="f7b36c4c9ab91e7e53659d9c437391552af9664b90f8949a0fad8c9026370ac1" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.514588 4931 scope.go:117] "RemoveContainer" containerID="1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1" Jan 31 04:25:56 crc kubenswrapper[4931]: E0131 04:25:56.514797 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-r5kkh_openshift-multus(0be95b57-6df4-4ba6-88e8-acf405e3d6d2)\"" pod="openshift-multus/multus-r5kkh" podUID="0be95b57-6df4-4ba6-88e8-acf405e3d6d2" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.550871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.550926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.550946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.550972 
4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.550993 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.654414 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.654462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.654472 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.654486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.654499 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.757771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.757810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.757818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.757832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.757842 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.860781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.860837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.860854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.860879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.860900 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.896522 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.896521 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:56 crc kubenswrapper[4931]: E0131 04:25:56.896745 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.896560 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:56 crc kubenswrapper[4931]: E0131 04:25:56.896881 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.896516 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:56 crc kubenswrapper[4931]: E0131 04:25:56.897022 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:56 crc kubenswrapper[4931]: E0131 04:25:56.897207 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.930811 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:38:13.299616071 +0000 UTC Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.964161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.964229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.964249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.964278 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:56 crc kubenswrapper[4931]: I0131 04:25:56.964300 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:56Z","lastTransitionTime":"2026-01-31T04:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.066614 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.066701 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.066749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.066784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.066811 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.170121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.170184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.170205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.170239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.170265 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.273880 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.273991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.274021 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.274053 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.274069 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.377149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.377244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.377266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.377303 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.377324 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.480507 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.480577 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.480598 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.480634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.480655 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.519751 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/1.log" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.583735 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.583778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.583788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.583805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.583820 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.687012 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.687085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.687097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.687115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.687126 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.789823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.789895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.789918 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.789945 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.789963 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.893459 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.893507 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.893530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.893554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.893572 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:57Z","lastTransitionTime":"2026-01-31T04:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:57 crc kubenswrapper[4931]: I0131 04:25:57.931669 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:46:10.154883508 +0000 UTC Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.003079 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.003184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.003216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.003285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.003365 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.109050 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.109111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.109126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.109151 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.109171 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.212717 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.212820 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.212841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.212871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.212892 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.316143 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.316226 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.316249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.316284 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.316308 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.418672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.418761 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.418780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.418805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.418823 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.521282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.521359 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.521377 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.521405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.521425 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.625562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.625638 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.625659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.625688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.625705 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.728734 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.728800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.728811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.728843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.728859 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.831567 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.831657 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.831680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.831756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.831784 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.896104 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.896101 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.896122 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.896338 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:25:58 crc kubenswrapper[4931]: E0131 04:25:58.896645 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:25:58 crc kubenswrapper[4931]: E0131 04:25:58.896755 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:25:58 crc kubenswrapper[4931]: E0131 04:25:58.896928 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:25:58 crc kubenswrapper[4931]: E0131 04:25:58.896985 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.932656 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:17:02.048298262 +0000 UTC Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.934767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.934857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.935076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.935119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:58 crc kubenswrapper[4931]: I0131 04:25:58.935140 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:58Z","lastTransitionTime":"2026-01-31T04:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.038632 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.038698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.038716 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.038771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.038791 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.142373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.142442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.142456 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.142473 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.142484 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.245164 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.245239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.245267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.245306 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.245330 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.349258 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.349316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.349330 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.349355 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.349369 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.454606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.454676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.454690 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.454715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.454757 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.558281 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.558410 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.558461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.558492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.558543 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.662262 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.662312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.662326 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.662344 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.662357 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.765354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.765405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.765423 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.765450 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.765472 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.869257 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.869321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.869338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.869367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.869388 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.933197 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:52:03.048239661 +0000 UTC Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.972872 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.972944 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.972964 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.972994 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.973014 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.982470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.982547 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.982566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.982597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:25:59 crc kubenswrapper[4931]: I0131 04:25:59.982618 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:25:59Z","lastTransitionTime":"2026-01-31T04:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.049943 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g"] Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.050355 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.053407 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.053631 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.053767 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.054927 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.155088 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1d84d69f-98f1-405b-8115-7b4ada41ff29-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.155442 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d84d69f-98f1-405b-8115-7b4ada41ff29-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.155564 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1d84d69f-98f1-405b-8115-7b4ada41ff29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.155674 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d84d69f-98f1-405b-8115-7b4ada41ff29-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.155825 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d84d69f-98f1-405b-8115-7b4ada41ff29-service-ca\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.256893 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1d84d69f-98f1-405b-8115-7b4ada41ff29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 
04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.256979 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d84d69f-98f1-405b-8115-7b4ada41ff29-service-ca\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.257014 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d84d69f-98f1-405b-8115-7b4ada41ff29-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.257121 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1d84d69f-98f1-405b-8115-7b4ada41ff29-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.257171 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d84d69f-98f1-405b-8115-7b4ada41ff29-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.257826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1d84d69f-98f1-405b-8115-7b4ada41ff29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.258055 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1d84d69f-98f1-405b-8115-7b4ada41ff29-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.259495 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d84d69f-98f1-405b-8115-7b4ada41ff29-service-ca\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.268594 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d84d69f-98f1-405b-8115-7b4ada41ff29-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.292522 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1d84d69f-98f1-405b-8115-7b4ada41ff29-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-llh8g\" (UID: \"1d84d69f-98f1-405b-8115-7b4ada41ff29\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.374890 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.532360 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" event={"ID":"1d84d69f-98f1-405b-8115-7b4ada41ff29","Type":"ContainerStarted","Data":"333c802e0cdc60822c1d29bcba3b815b424a53ea390f88b5dc462ef8c3bceb9f"} Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.896575 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.896672 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:00 crc kubenswrapper[4931]: E0131 04:26:00.897926 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.896975 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.896842 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:00 crc kubenswrapper[4931]: E0131 04:26:00.898666 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:26:00 crc kubenswrapper[4931]: E0131 04:26:00.898323 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:26:00 crc kubenswrapper[4931]: E0131 04:26:00.897999 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.934271 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:52:51.154051086 +0000 UTC Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.934329 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 04:26:00 crc kubenswrapper[4931]: I0131 04:26:00.946402 4931 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 04:26:01 crc kubenswrapper[4931]: I0131 04:26:01.537812 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" event={"ID":"1d84d69f-98f1-405b-8115-7b4ada41ff29","Type":"ContainerStarted","Data":"429358cc3893d4e07a6dd637bb1dc42f17af944a00fce6f8565cc467b34e968f"} Jan 31 04:26:01 crc kubenswrapper[4931]: I0131 04:26:01.557438 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-llh8g" podStartSLOduration=100.557412158 podStartE2EDuration="1m40.557412158s" podCreationTimestamp="2026-01-31 04:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:01.554819039 +0000 UTC m=+120.364047913" watchObservedRunningTime="2026-01-31 04:26:01.557412158 +0000 UTC m=+120.366641072" Jan 31 04:26:01 crc kubenswrapper[4931]: E0131 04:26:01.837093 4931 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 04:26:02 crc kubenswrapper[4931]: E0131 04:26:02.039287 4931 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:26:02 crc kubenswrapper[4931]: I0131 04:26:02.896093 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:02 crc kubenswrapper[4931]: I0131 04:26:02.896124 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:02 crc kubenswrapper[4931]: I0131 04:26:02.896173 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:02 crc kubenswrapper[4931]: I0131 04:26:02.896213 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:02 crc kubenswrapper[4931]: E0131 04:26:02.896676 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:26:02 crc kubenswrapper[4931]: I0131 04:26:02.896763 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:26:02 crc kubenswrapper[4931]: E0131 04:26:02.896905 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:26:02 crc kubenswrapper[4931]: E0131 04:26:02.897084 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:26:02 crc kubenswrapper[4931]: E0131 04:26:02.897139 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:26:03 crc kubenswrapper[4931]: I0131 04:26:03.548380 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/3.log" Jan 31 04:26:03 crc kubenswrapper[4931]: I0131 04:26:03.551255 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerStarted","Data":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} Jan 31 04:26:03 crc kubenswrapper[4931]: I0131 04:26:03.551663 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:26:03 crc kubenswrapper[4931]: I0131 04:26:03.576999 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podStartSLOduration=101.576981829 podStartE2EDuration="1m41.576981829s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:03.576698021 +0000 UTC m=+122.385926895" watchObservedRunningTime="2026-01-31 04:26:03.576981829 +0000 UTC m=+122.386210703" Jan 31 04:26:03 crc kubenswrapper[4931]: I0131 04:26:03.969830 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4cc6z"] Jan 31 04:26:03 crc kubenswrapper[4931]: I0131 04:26:03.969969 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:03 crc kubenswrapper[4931]: E0131 04:26:03.970100 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:26:04 crc kubenswrapper[4931]: I0131 04:26:04.896267 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:04 crc kubenswrapper[4931]: I0131 04:26:04.896501 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:04 crc kubenswrapper[4931]: E0131 04:26:04.896594 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:26:04 crc kubenswrapper[4931]: E0131 04:26:04.896887 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:26:04 crc kubenswrapper[4931]: I0131 04:26:04.897100 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:04 crc kubenswrapper[4931]: E0131 04:26:04.897259 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:26:05 crc kubenswrapper[4931]: I0131 04:26:05.896691 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:05 crc kubenswrapper[4931]: E0131 04:26:05.896983 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:26:06 crc kubenswrapper[4931]: I0131 04:26:06.896405 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:06 crc kubenswrapper[4931]: E0131 04:26:06.896600 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:26:06 crc kubenswrapper[4931]: I0131 04:26:06.896687 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:06 crc kubenswrapper[4931]: E0131 04:26:06.896886 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:26:06 crc kubenswrapper[4931]: I0131 04:26:06.897147 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:06 crc kubenswrapper[4931]: E0131 04:26:06.897240 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:26:07 crc kubenswrapper[4931]: E0131 04:26:07.040108 4931 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:26:07 crc kubenswrapper[4931]: I0131 04:26:07.896241 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:07 crc kubenswrapper[4931]: E0131 04:26:07.896442 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:26:08 crc kubenswrapper[4931]: I0131 04:26:08.326633 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:26:08 crc kubenswrapper[4931]: I0131 04:26:08.896243 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:08 crc kubenswrapper[4931]: I0131 04:26:08.896304 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:08 crc kubenswrapper[4931]: I0131 04:26:08.896257 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:08 crc kubenswrapper[4931]: E0131 04:26:08.896431 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:26:08 crc kubenswrapper[4931]: E0131 04:26:08.896553 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:26:08 crc kubenswrapper[4931]: E0131 04:26:08.896672 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:26:09 crc kubenswrapper[4931]: I0131 04:26:09.897030 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:09 crc kubenswrapper[4931]: I0131 04:26:09.897516 4931 scope.go:117] "RemoveContainer" containerID="1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1" Jan 31 04:26:09 crc kubenswrapper[4931]: E0131 04:26:09.897532 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:26:10 crc kubenswrapper[4931]: I0131 04:26:10.582914 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/1.log" Jan 31 04:26:10 crc kubenswrapper[4931]: I0131 04:26:10.582998 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerStarted","Data":"6240b71dda629dd359164d3a9ec2a02be6499b509a6b40f63d85f9bd579218f9"} Jan 31 04:26:10 crc kubenswrapper[4931]: I0131 04:26:10.896328 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:10 crc kubenswrapper[4931]: I0131 04:26:10.896398 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:10 crc kubenswrapper[4931]: I0131 04:26:10.896406 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:10 crc kubenswrapper[4931]: E0131 04:26:10.896545 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:26:10 crc kubenswrapper[4931]: E0131 04:26:10.896753 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:26:10 crc kubenswrapper[4931]: E0131 04:26:10.896967 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:26:11 crc kubenswrapper[4931]: I0131 04:26:11.896094 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:11 crc kubenswrapper[4931]: E0131 04:26:11.898428 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4cc6z" podUID="df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342" Jan 31 04:26:12 crc kubenswrapper[4931]: I0131 04:26:12.896427 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:12 crc kubenswrapper[4931]: I0131 04:26:12.896545 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:12 crc kubenswrapper[4931]: I0131 04:26:12.896579 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:12 crc kubenswrapper[4931]: I0131 04:26:12.900323 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 04:26:12 crc kubenswrapper[4931]: I0131 04:26:12.901291 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 04:26:12 crc kubenswrapper[4931]: I0131 04:26:12.901544 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 04:26:12 crc kubenswrapper[4931]: I0131 04:26:12.901640 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 04:26:13 crc kubenswrapper[4931]: I0131 04:26:13.896052 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:13 crc kubenswrapper[4931]: I0131 04:26:13.899469 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 04:26:13 crc kubenswrapper[4931]: I0131 04:26:13.899469 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.596020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.653713 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.654524 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.658704 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.659358 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.659401 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.659461 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.659552 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.659705 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.659794 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.659984 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.660239 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.677362 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5p2zv"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.678383 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.682350 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.682768 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.683451 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.684408 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.685337 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.685875 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.686216 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.686228 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7z4zh"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.687126 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.690240 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.706806 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.708439 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.709361 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.710669 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.712189 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.724822 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.725572 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.726472 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bmrsg"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.726684 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.727991 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.733834 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8hh7p"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.734390 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bsjw7"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 
04:26:20.735671 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.737207 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.737893 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.738468 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.739984 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.740516 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.740999 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.741146 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.741437 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.741879 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pc59\" (UniqueName: \"kubernetes.io/projected/1acbd6a9-2643-4c2c-9a20-4da63545ac23-kube-api-access-7pc59\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.741933 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.741956 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-encryption-config\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.741986 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-serving-cert\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.742009 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-etcd-client\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.742041 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.742061 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-audit-policies\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.742081 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1acbd6a9-2643-4c2c-9a20-4da63545ac23-audit-dir\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.742358 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.743786 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.744869 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.746189 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bmg8v"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.746566 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.746575 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.749903 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.750617 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.750840 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751008 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751189 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751243 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751257 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751352 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751639 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751815 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751858 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.751962 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.752479 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.763036 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-84x2t"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.763439 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.763796 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.764102 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.768659 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.769082 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.769608 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.778638 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.778865 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.778963 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.779067 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.779170 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.779211 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.779173 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.779310 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.780594 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.780637 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.780783 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.780837 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.780947 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.780960 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781081 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.798059 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781637 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x6pwq"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781097 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781698 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781814 4931 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781869 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781901 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781946 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781940 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.781977 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782009 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782015 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782060 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782118 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782126 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782159 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782184 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782193 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782257 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782304 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782346 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782395 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782413 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 
04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782423 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.782482 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.823353 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.823472 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.823357 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fd6j6"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.823972 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.825123 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.828112 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w528s"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.828414 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.828605 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.828931 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.828994 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.829052 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.829688 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.829970 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.829979 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.830118 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.830960 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.831278 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.831511 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.831681 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jvt8b"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.832151 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvt8b" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.838402 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.839015 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.839329 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.839991 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842667 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f411a06-d760-4d52-8939-36856b6813ad-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842708 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa274e63-34e4-461b-bd7a-270fc7bba034-serving-cert\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842786 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842808 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842834 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842854 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842873 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-trusted-ca-bundle\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.842891 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6lrp\" (UniqueName: \"kubernetes.io/projected/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-kube-api-access-p6lrp\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.843560 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-service-ca\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.843586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.843728 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-node-pullsecrets\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.843935 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.846053 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-config\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.847100 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-oauth-serving-cert\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.847360 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.847418 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-audit\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.847454 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pc59\" (UniqueName: \"kubernetes.io/projected/1acbd6a9-2643-4c2c-9a20-4da63545ac23-kube-api-access-7pc59\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.847661 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-config\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.850422 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.852388 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.853343 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-image-import-ca\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-encryption-config\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 
04:26:20.854298 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-etcd-client\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854350 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sws7\" (UniqueName: \"kubernetes.io/projected/7f411a06-d760-4d52-8939-36856b6813ad-kube-api-access-2sws7\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854390 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854441 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-client-ca\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854469 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-serving-cert\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854496 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-serving-cert\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854044 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854530 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854565 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-service-ca-bundle\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854595 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-audit-dir\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854633 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlbg\" (UniqueName: \"kubernetes.io/projected/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-kube-api-access-9dlbg\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854666 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854703 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-audit-policies\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854757 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1acbd6a9-2643-4c2c-9a20-4da63545ac23-audit-dir\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854786 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-serving-cert\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854813 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854844 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxpr\" (UniqueName: \"kubernetes.io/projected/34be6968-eb64-46a3-9e5f-f5568d764d8d-kube-api-access-vtxpr\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854869 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsrm\" (UniqueName: \"kubernetes.io/projected/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-kube-api-access-zpsrm\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854888 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-serving-cert\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854907 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854930 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-policies\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.854980 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs42v\" (UniqueName: \"kubernetes.io/projected/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-kube-api-access-vs42v\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855001 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855008 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-config\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855033 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f411a06-d760-4d52-8939-36856b6813ad-config\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855070 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855128 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855125 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855127 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855197 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-config\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855245 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855255 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855299 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855259 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-dir\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855357 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-encryption-config\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855301 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1acbd6a9-2643-4c2c-9a20-4da63545ac23-audit-dir\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855378 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855408 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-etcd-serving-ca\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855428 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855435 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855444 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-oauth-config\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855462 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-etcd-client\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855513 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855541 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855867 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1acbd6a9-2643-4c2c-9a20-4da63545ac23-audit-policies\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.855887 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.856682 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-s5dt7"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.857602 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.858555 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.858959 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8gx6\" (UniqueName: \"kubernetes.io/projected/fa274e63-34e4-461b-bd7a-270fc7bba034-kube-api-access-w8gx6\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.859005 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.859040 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f411a06-d760-4d52-8939-36856b6813ad-images\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.860042 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.860109 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.860546 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.860903 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.861131 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.862583 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.862904 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.863023 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.870308 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.871493 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-etcd-client\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.873011 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.873243 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-serving-cert\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.873556 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1acbd6a9-2643-4c2c-9a20-4da63545ac23-encryption-config\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.875585 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.876542 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.877265 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.879207 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.885404 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dm9q4"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.885625 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.887708 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.887838 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.888145 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-894fk"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.888252 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.888540 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.888734 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.888871 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.889081 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.889198 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.891365 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.892893 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.893744 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.894157 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.894597 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.896835 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nqffw"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.897273 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.897789 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.899008 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.899847 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.903979 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drqrf"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.904707 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.911620 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5p2zv"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.916550 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8hh7p"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.917125 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.918270 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.919475 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.920504 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.921258 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bmrsg"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.922463 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q7slc"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.924380 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.924511 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.925110 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fd6j6"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.926843 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x6pwq"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.928295 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.929627 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.930887 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-84x2t"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.932877 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bsjw7"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.934793 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.936166 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w528s"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.937147 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.937281 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.938151 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dm9q4"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.938994 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.940344 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.940992 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.941879 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.942782 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bmg8v"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.944669 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvt8b"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.946371 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5g5dr"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.947398 4931 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-machine-config-operator/machine-config-server-ggk58"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.947559 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.948428 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7z4zh"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.948526 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.949291 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.950276 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nqffw"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.951225 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.952228 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.953928 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5g5dr"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.955469 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.956438 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drqrf"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.957385 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.958322 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-894fk"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.959301 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.959588 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-image-import-ca\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.959632 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.959661 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-etcd-client\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.959684 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sws7\" (UniqueName: \"kubernetes.io/projected/7f411a06-d760-4d52-8939-36856b6813ad-kube-api-access-2sws7\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.959968 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-client-ca\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960007 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-serving-cert\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960034 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960060 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-audit-dir\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960081 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-service-ca-bundle\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960106 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlbg\" (UniqueName: \"kubernetes.io/projected/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-kube-api-access-9dlbg\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960132 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-serving-cert\") pod 
\"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960235 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960455 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-audit-dir\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxpr\" (UniqueName: \"kubernetes.io/projected/34be6968-eb64-46a3-9e5f-f5568d764d8d-kube-api-access-vtxpr\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960675 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsrm\" (UniqueName: \"kubernetes.io/projected/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-kube-api-access-zpsrm\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960703 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-serving-cert\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960744 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960770 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs42v\" (UniqueName: \"kubernetes.io/projected/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-kube-api-access-vs42v\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.960795 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-policies\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.961714 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-service-ca-bundle\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963384 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-client-ca\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963498 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-image-import-ca\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963649 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963743 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-config\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963803 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f411a06-d760-4d52-8939-36856b6813ad-config\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963852 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963932 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-policies\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.963974 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964025 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-config\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964057 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964108 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-dir\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964286 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-encryption-config\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964318 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964573 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f411a06-d760-4d52-8939-36856b6813ad-config\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964594 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-etcd-serving-ca\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964640 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-oauth-config\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964659 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-dir\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" 
Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964680 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964866 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8gx6\" (UniqueName: \"kubernetes.io/projected/fa274e63-34e4-461b-bd7a-270fc7bba034-kube-api-access-w8gx6\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964913 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964822 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-config\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965014 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f411a06-d760-4d52-8939-36856b6813ad-images\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965040 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f411a06-d760-4d52-8939-36856b6813ad-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965065 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa274e63-34e4-461b-bd7a-270fc7bba034-serving-cert\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965182 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965212 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965358 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965484 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-trusted-ca-bundle\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965510 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6lrp\" (UniqueName: \"kubernetes.io/projected/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-kube-api-access-p6lrp\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965587 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-config\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965537 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-service-ca\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965635 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965643 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-etcd-serving-ca\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965657 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-node-pullsecrets\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965789 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965816 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-config\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.964584 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-serving-cert\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.965857 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-oauth-serving-cert\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.966103 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-audit\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.966142 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-config\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.966319 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.966947 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-oauth-serving-cert\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.967288 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f411a06-d760-4d52-8939-36856b6813ad-images\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.968015 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.968463 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.968567 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-audit\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.968857 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-node-pullsecrets\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.968980 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-oauth-config\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.969030 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-etcd-client\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.969018 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc 
kubenswrapper[4931]: I0131 04:26:20.969517 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.969752 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.970022 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.970071 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa274e63-34e4-461b-bd7a-270fc7bba034-config\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.966949 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-config\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.970450 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.970860 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-console-serving-cert\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.971022 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-service-ca\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.971478 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.972291 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.972768 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-trusted-ca-bundle\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.973585 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f411a06-d760-4d52-8939-36856b6813ad-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.973706 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.973707 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.973882 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa274e63-34e4-461b-bd7a-270fc7bba034-serving-cert\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.973920 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.974708 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-serving-cert\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 
04:26:20.976003 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.976500 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-encryption-config\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.978773 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.979117 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q7slc"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.982237 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.984978 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.987689 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.989101 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9fzp8"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.989971 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.990477 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9fzp8"] Jan 31 04:26:20 crc kubenswrapper[4931]: I0131 04:26:20.999012 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.017765 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.038558 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.057016 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.078226 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.096943 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.116617 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.137821 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.165302 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.178761 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.197075 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.218214 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.238425 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.258772 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.317437 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.325317 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pc59\" (UniqueName: \"kubernetes.io/projected/1acbd6a9-2643-4c2c-9a20-4da63545ac23-kube-api-access-7pc59\") pod \"apiserver-7bbb656c7d-hvjk2\" (UID: \"1acbd6a9-2643-4c2c-9a20-4da63545ac23\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.337472 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.357844 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.378308 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.398080 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.417407 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.438095 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.458333 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.478408 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.498515 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.517648 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.538834 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.558791 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.577936 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.581093 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.597591 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.618277 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.639582 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.658864 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.679826 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.698920 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.718016 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.739127 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.760861 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.778099 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.798005 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.818275 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.837509 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.857185 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.867117 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2"] Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.890757 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.899310 4931 request.go:700] Waited for 1.01062612s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.902854 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.917269 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.938046 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.958892 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.978867 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 04:26:21 crc kubenswrapper[4931]: I0131 04:26:21.998493 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.017499 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.037386 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.058217 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.078230 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.098761 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.118785 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.138714 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.158410 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.177691 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.198473 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.217347 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 04:26:22 crc 
kubenswrapper[4931]: I0131 04:26:22.238441 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.256448 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.277687 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.297228 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.317139 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.337604 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.357530 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.387662 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.398590 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.419051 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.437609 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.457386 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.478003 4931 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.499688 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.518118 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.538103 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.558148 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.577282 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.599299 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 04:26:22 crc kubenswrapper[4931]: 
I0131 04:26:22.619273 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.635043 4931 generic.go:334] "Generic (PLEG): container finished" podID="1acbd6a9-2643-4c2c-9a20-4da63545ac23" containerID="2a30d0a57b3ee6ac0c0ddc307d6ad246d429d91fcf1c743ba272b150a54fec53" exitCode=0 Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.635107 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" event={"ID":"1acbd6a9-2643-4c2c-9a20-4da63545ac23","Type":"ContainerDied","Data":"2a30d0a57b3ee6ac0c0ddc307d6ad246d429d91fcf1c743ba272b150a54fec53"} Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.635150 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" event={"ID":"1acbd6a9-2643-4c2c-9a20-4da63545ac23","Type":"ContainerStarted","Data":"c00f4c055790a962dac730735eb04d5ce5fc6f9b67dbe380a12e9a0ea4a406f9"} Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.637967 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.710188 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sws7\" (UniqueName: \"kubernetes.io/projected/7f411a06-d760-4d52-8939-36856b6813ad-kube-api-access-2sws7\") pod \"machine-api-operator-5694c8668f-5p2zv\" (UID: \"7f411a06-d760-4d52-8939-36856b6813ad\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.730255 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxpr\" (UniqueName: \"kubernetes.io/projected/34be6968-eb64-46a3-9e5f-f5568d764d8d-kube-api-access-vtxpr\") pod \"oauth-openshift-558db77b4-bmrsg\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.748073 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlbg\" (UniqueName: \"kubernetes.io/projected/7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea-kube-api-access-9dlbg\") pod \"apiserver-76f77b778f-bsjw7\" (UID: \"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea\") " pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.766472 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsrm\" (UniqueName: \"kubernetes.io/projected/d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048-kube-api-access-zpsrm\") pod \"console-f9d7485db-8hh7p\" (UID: \"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048\") " pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.777976 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs42v\" (UniqueName: \"kubernetes.io/projected/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-kube-api-access-vs42v\") pod \"route-controller-manager-6576b87f9c-hv6zv\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.795180 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8gx6\" (UniqueName: 
\"kubernetes.io/projected/fa274e63-34e4-461b-bd7a-270fc7bba034-kube-api-access-w8gx6\") pod \"authentication-operator-69f744f599-7z4zh\" (UID: \"fa274e63-34e4-461b-bd7a-270fc7bba034\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.818667 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.826940 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6lrp\" (UniqueName: \"kubernetes.io/projected/13d327eb-39b7-4a67-8a2f-a372ccbbd5de-kube-api-access-p6lrp\") pod \"openshift-config-operator-7777fb866f-lbw2z\" (UID: \"13d327eb-39b7-4a67-8a2f-a372ccbbd5de\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.838394 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.845780 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.859214 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.859269 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.873049 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.878856 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.904964 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/135975db-0bfc-4fb8-b014-a3e51817c777-metrics-tls\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.905115 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7spv\" (UniqueName: \"kubernetes.io/projected/135975db-0bfc-4fb8-b014-a3e51817c777-kube-api-access-f7spv\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.905161 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/928c64fe-ab31-4942-9bdd-64ab8b8339aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.905238 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc75l\" (UniqueName: \"kubernetes.io/projected/faa586ad-300a-4b7d-bb6d-e355b18670d6-kube-api-access-fc75l\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.905326 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-certificates\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.905392 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e81f063c-6cde-4533-ac88-72044b1c8eef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.905431 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de94b2f3-5852-49ce-81f2-daad119be292-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.906337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08b24fc-253c-44c8-a272-78f4601644b1-serving-cert\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.906470 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/928c64fe-ab31-4942-9bdd-64ab8b8339aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.906586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdpf5\" (UniqueName: \"kubernetes.io/projected/f08b24fc-253c-44c8-a272-78f4601644b1-kube-api-access-bdpf5\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.907522 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa586ad-300a-4b7d-bb6d-e355b18670d6-config\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.907922 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908024 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e81f063c-6cde-4533-ac88-72044b1c8eef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908063 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9dc\" (UniqueName: \"kubernetes.io/projected/e81f063c-6cde-4533-ac88-72044b1c8eef-kube-api-access-gf9dc\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908115 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-tls\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908148 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-b2r6r\" (UniqueName: \"kubernetes.io/projected/70ea039d-762a-4d09-a1a3-45d32be0e754-kube-api-access-b2r6r\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908406 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/135975db-0bfc-4fb8-b014-a3e51817c777-trusted-ca\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908441 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de94b2f3-5852-49ce-81f2-daad119be292-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908511 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-config\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908610 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttr4l\" (UniqueName: \"kubernetes.io/projected/a74f2108-1c90-40a5-853b-55503384e185-kube-api-access-ttr4l\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908759 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908801 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l477s\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-kube-api-access-l477s\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.908834 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1bd05f57-7c4f-4c00-96ce-e8f92338d14d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-49jws\" (UID: \"1bd05f57-7c4f-4c00-96ce-e8f92338d14d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:22 crc kubenswrapper[4931]: E0131 04:26:22.912108 4931 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:23.412084011 +0000 UTC m=+142.221312915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.914460 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-client-ca\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.914517 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de94b2f3-5852-49ce-81f2-daad119be292-config\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.914572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpf9\" (UniqueName: \"kubernetes.io/projected/07927adb-887c-4259-91ca-b46c8f9809e4-kube-api-access-mrpf9\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.914634 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e81f063c-6cde-4533-ac88-72044b1c8eef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.914764 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-ca\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.915071 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-trusted-ca\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.915127 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/07927adb-887c-4259-91ca-b46c8f9809e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.915233 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74f2108-1c90-40a5-853b-55503384e185-auth-proxy-config\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.915307 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-config\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.915330 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1588ab56-bcb9-4baa-bf88-8edca43a7ba5-metrics-tls\") pod \"dns-operator-744455d44c-x6pwq\" (UID: \"1588ab56-bcb9-4baa-bf88-8edca43a7ba5\") " pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.915351 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7n4t\" (UniqueName: \"kubernetes.io/projected/1588ab56-bcb9-4baa-bf88-8edca43a7ba5-kube-api-access-d7n4t\") pod \"dns-operator-744455d44c-x6pwq\" (UID: \"1588ab56-bcb9-4baa-bf88-8edca43a7ba5\") " pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917067 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/faa586ad-300a-4b7d-bb6d-e355b18670d6-trusted-ca\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917106 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917142 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-client\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917181 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ea039d-762a-4d09-a1a3-45d32be0e754-serving-cert\") 
pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917202 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-config\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917223 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74f2108-1c90-40a5-853b-55503384e185-config\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917287 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a74f2108-1c90-40a5-853b-55503384e185-machine-approver-tls\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917367 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-service-ca\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917389 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/135975db-0bfc-4fb8-b014-a3e51817c777-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917480 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07927adb-887c-4259-91ca-b46c8f9809e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917503 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdj2\" (UniqueName: \"kubernetes.io/projected/1bd05f57-7c4f-4c00-96ce-e8f92338d14d-kube-api-access-wzdj2\") pod \"cluster-samples-operator-665b6dd947-49jws\" (UID: \"1bd05f57-7c4f-4c00-96ce-e8f92338d14d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917525 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917546 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6bbp\" (UniqueName: \"kubernetes.io/projected/fd126257-f24f-4597-8e75-e6d1c24e8709-kube-api-access-v6bbp\") pod \"downloads-7954f5f757-jvt8b\" (UID: \"fd126257-f24f-4597-8e75-e6d1c24e8709\") " pod="openshift-console/downloads-7954f5f757-jvt8b" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917612 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-bound-sa-token\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.917633 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa586ad-300a-4b7d-bb6d-e355b18670d6-serving-cert\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.927170 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.951127 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.954944 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:22 crc kubenswrapper[4931]: I0131 04:26:22.991545 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020010 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020292 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-mountpoint-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020326 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd2lk\" (UniqueName: \"kubernetes.io/projected/39de16fa-35f7-4286-8160-f29fd1059389-kube-api-access-pd2lk\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020355 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce116733-a153-44bf-838d-7bd7593a3b96-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020387 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-client-ca\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020413 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de94b2f3-5852-49ce-81f2-daad119be292-config\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020436 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d88b1b9a-0913-4310-a020-006c16fd5fff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020457 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-socket-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc 
kubenswrapper[4931]: I0131 04:26:23.020480 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e81f063c-6cde-4533-ac88-72044b1c8eef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020507 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e32d743c-c801-41a1-9f35-3b641554339f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dm9q4\" (UID: \"e32d743c-c801-41a1-9f35-3b641554339f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020529 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-ca\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020551 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgq4x\" (UniqueName: \"kubernetes.io/projected/3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c-kube-api-access-xgq4x\") pod \"ingress-canary-9fzp8\" (UID: \"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c\") " pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020575 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a4f1385-7a3c-4195-8b66-1df921d7187b-proxy-tls\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020597 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv9r2\" (UniqueName: \"kubernetes.io/projected/50968d0b-3fb0-4208-b5df-b1a07c341f0a-kube-api-access-vv9r2\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020621 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020643 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnp2\" (UniqueName: \"kubernetes.io/projected/0f58bb1e-365a-4d56-8b48-5cb0e8e12982-kube-api-access-crnp2\") pod \"migrator-59844c95c7-zlqj2\" (UID: \"0f58bb1e-365a-4d56-8b48-5cb0e8e12982\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020664 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07927adb-887c-4259-91ca-b46c8f9809e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020685 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7595206-8944-4009-bcd7-f9952d225277-secret-volume\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020706 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/faa586ad-300a-4b7d-bb6d-e355b18670d6-trusted-ca\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020742 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb0556-24cb-479a-bbde-e360630fe24a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020762 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmm6\" (UniqueName: \"kubernetes.io/projected/10bb0556-24cb-479a-bbde-e360630fe24a-kube-api-access-vlmm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020790 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-config\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020840 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19835a65-0362-4aa9-9152-540437cd2d90-webhook-cert\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020864 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ng8\" (UniqueName: \"kubernetes.io/projected/b94cc359-b91d-4058-9b99-daee5cb58497-kube-api-access-c2ng8\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 
04:26:23.020893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/29ed5258-385e-4835-89ed-03e3c21cc7cb-signing-key\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020921 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fefdfe52-daca-429b-af44-e9f855996b8e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nh9j9\" (UID: \"fefdfe52-daca-429b-af44-e9f855996b8e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020947 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-plugins-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020971 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmmk\" (UniqueName: \"kubernetes.io/projected/0a4f1385-7a3c-4195-8b66-1df921d7187b-kube-api-access-kwmmk\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.020990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-service-ca\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/135975db-0bfc-4fb8-b014-a3e51817c777-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021032 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlrh\" (UniqueName: \"kubernetes.io/projected/fefdfe52-daca-429b-af44-e9f855996b8e-kube-api-access-twlrh\") pod \"package-server-manager-789f6589d5-nh9j9\" (UID: \"fefdfe52-daca-429b-af44-e9f855996b8e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021050 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce116733-a153-44bf-838d-7bd7593a3b96-proxy-tls\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021073 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdj2\" (UniqueName: \"kubernetes.io/projected/1bd05f57-7c4f-4c00-96ce-e8f92338d14d-kube-api-access-wzdj2\") pod \"cluster-samples-operator-665b6dd947-49jws\" (UID: \"1bd05f57-7c4f-4c00-96ce-e8f92338d14d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021094 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/29ed5258-385e-4835-89ed-03e3c21cc7cb-signing-cabundle\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021112 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c-cert\") pod \"ingress-canary-9fzp8\" (UID: \"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c\") " pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021133 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsw87\" (UniqueName: \"kubernetes.io/projected/e32d743c-c801-41a1-9f35-3b641554339f-kube-api-access-fsw87\") pod \"multus-admission-controller-857f4d67dd-dm9q4\" (UID: \"e32d743c-c801-41a1-9f35-3b641554339f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021168 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-bound-sa-token\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021195 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021229 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766sg\" (UniqueName: \"kubernetes.io/projected/16b7b136-dad4-4347-9941-d97a23fa694c-kube-api-access-766sg\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgjvq\" (UID: \"16b7b136-dad4-4347-9941-d97a23fa694c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021249 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/135975db-0bfc-4fb8-b014-a3e51817c777-metrics-tls\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021271 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39de16fa-35f7-4286-8160-f29fd1059389-serving-cert\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021295 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7spv\" (UniqueName: \"kubernetes.io/projected/135975db-0bfc-4fb8-b014-a3e51817c777-kube-api-access-f7spv\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021317 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/433c2cfb-3f6c-4fe2-9289-237f398dea0b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021340 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-certificates\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021365 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19835a65-0362-4aa9-9152-540437cd2d90-apiservice-cert\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021390 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a4f1385-7a3c-4195-8b66-1df921d7187b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021413 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08b24fc-253c-44c8-a272-78f4601644b1-serving-cert\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021444 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/928c64fe-ab31-4942-9bdd-64ab8b8339aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021463 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jbkdk\" (UniqueName: \"kubernetes.io/projected/c1a6dcc8-df4b-47d7-b871-30b183c83de2-kube-api-access-jbkdk\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021489 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa586ad-300a-4b7d-bb6d-e355b18670d6-config\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021509 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021527 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/19835a65-0362-4aa9-9152-540437cd2d90-tmpfs\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021547 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-tls\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2r6r\" (UniqueName: \"kubernetes.io/projected/70ea039d-762a-4d09-a1a3-45d32be0e754-kube-api-access-b2r6r\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021592 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98536bf1-6e84-479c-b744-648cd081d555-srv-cert\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021615 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9655t\" (UniqueName: \"kubernetes.io/projected/29ed5258-385e-4835-89ed-03e3c21cc7cb-kube-api-access-9655t\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021638 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/135975db-0bfc-4fb8-b014-a3e51817c777-trusted-ca\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: 
\"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021659 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de94b2f3-5852-49ce-81f2-daad119be292-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021669 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-ca\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021685 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg7x\" (UniqueName: \"kubernetes.io/projected/98536bf1-6e84-479c-b744-648cd081d555-kube-api-access-blg7x\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021789 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttr4l\" (UniqueName: \"kubernetes.io/projected/a74f2108-1c90-40a5-853b-55503384e185-kube-api-access-ttr4l\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.021850 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l477s\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-kube-api-access-l477s\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.024299 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa586ad-300a-4b7d-bb6d-e355b18670d6-config\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.025544 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-config\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.025674 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-service-ca\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.025813 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07927adb-887c-4259-91ca-b46c8f9809e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026100 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026495 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1bd05f57-7c4f-4c00-96ce-e8f92338d14d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-49jws\" (UID: \"1bd05f57-7c4f-4c00-96ce-e8f92338d14d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026544 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsj8c\" (UniqueName: \"kubernetes.io/projected/ce116733-a153-44bf-838d-7bd7593a3b96-kube-api-access-hsj8c\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026567 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d88b1b9a-0913-4310-a020-006c16fd5fff-srv-cert\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026588 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-stats-auth\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026628 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/50968d0b-3fb0-4208-b5df-b1a07c341f0a-metrics-tls\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.026697 4931 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:23.526679278 +0000 UTC m=+142.335908152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.026926 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-client-ca\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027121 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/928c64fe-ab31-4942-9bdd-64ab8b8339aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027198 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c1a6dcc8-df4b-47d7-b871-30b183c83de2-node-bootstrap-token\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027832 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrpf9\" (UniqueName: \"kubernetes.io/projected/07927adb-887c-4259-91ca-b46c8f9809e4-kube-api-access-mrpf9\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027859 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c1a6dcc8-df4b-47d7-b871-30b183c83de2-certs\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027870 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-certificates\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027878 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-metrics-certs\") pod 
\"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027921 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-default-certificate\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.027942 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh282\" (UniqueName: \"kubernetes.io/projected/fc0e3a20-b429-47e1-8100-aa1fae313bf7-kube-api-access-zh282\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028131 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-trusted-ca\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028170 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74f2108-1c90-40a5-853b-55503384e185-auth-proxy-config\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028180 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de94b2f3-5852-49ce-81f2-daad119be292-config\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028191 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433c2cfb-3f6c-4fe2-9289-237f398dea0b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028258 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-config\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028288 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1588ab56-bcb9-4baa-bf88-8edca43a7ba5-metrics-tls\") pod \"dns-operator-744455d44c-x6pwq\" (UID: \"1588ab56-bcb9-4baa-bf88-8edca43a7ba5\") " pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 
04:26:23.028308 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7n4t\" (UniqueName: \"kubernetes.io/projected/1588ab56-bcb9-4baa-bf88-8edca43a7ba5-kube-api-access-d7n4t\") pod \"dns-operator-744455d44c-x6pwq\" (UID: \"1588ab56-bcb9-4baa-bf88-8edca43a7ba5\") " pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028373 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-client\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028401 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028432 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc0e3a20-b429-47e1-8100-aa1fae313bf7-service-ca-bundle\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029353 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ea039d-762a-4d09-a1a3-45d32be0e754-serving-cert\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029377 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74f2108-1c90-40a5-853b-55503384e185-config\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029405 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a74f2108-1c90-40a5-853b-55503384e185-machine-approver-tls\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029418 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08b24fc-253c-44c8-a272-78f4601644b1-config\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029431 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/16b7b136-dad4-4347-9941-d97a23fa694c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgjvq\" (UID: \"16b7b136-dad4-4347-9941-d97a23fa694c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028703 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/135975db-0bfc-4fb8-b014-a3e51817c777-trusted-ca\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028827 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-tls\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029478 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029522 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07927adb-887c-4259-91ca-b46c8f9809e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029549 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66cq\" (UniqueName: \"kubernetes.io/projected/d88b1b9a-0913-4310-a020-006c16fd5fff-kube-api-access-t66cq\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029555 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/faa586ad-300a-4b7d-bb6d-e355b18670d6-trusted-ca\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029579 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029770 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-trusted-ca\") pod 
\"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028900 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74f2108-1c90-40a5-853b-55503384e185-auth-proxy-config\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.028991 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/135975db-0bfc-4fb8-b014-a3e51817c777-metrics-tls\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029155 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e81f063c-6cde-4533-ac88-72044b1c8eef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029903 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6bbp\" (UniqueName: \"kubernetes.io/projected/fd126257-f24f-4597-8e75-e6d1c24e8709-kube-api-access-v6bbp\") pod \"downloads-7954f5f757-jvt8b\" (UID: \"fd126257-f24f-4597-8e75-e6d1c24e8709\") " pod="openshift-console/downloads-7954f5f757-jvt8b" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029936 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7595206-8944-4009-bcd7-f9952d225277-config-volume\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029953 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50968d0b-3fb0-4208-b5df-b1a07c341f0a-config-volume\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.029978 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa586ad-300a-4b7d-bb6d-e355b18670d6-serving-cert\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030005 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39de16fa-35f7-4286-8160-f29fd1059389-config\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030025 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/928c64fe-ab31-4942-9bdd-64ab8b8339aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030046 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc75l\" (UniqueName: \"kubernetes.io/projected/faa586ad-300a-4b7d-bb6d-e355b18670d6-kube-api-access-fc75l\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030068 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cml4\" (UniqueName: \"kubernetes.io/projected/d7595206-8944-4009-bcd7-f9952d225277-kube-api-access-8cml4\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030088 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e81f063c-6cde-4533-ac88-72044b1c8eef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030106 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de94b2f3-5852-49ce-81f2-daad119be292-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030123 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bb0556-24cb-479a-bbde-e360630fe24a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030154 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433c2cfb-3f6c-4fe2-9289-237f398dea0b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030172 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-registration-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030189 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0a4f1385-7a3c-4195-8b66-1df921d7187b-images\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030245 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdpf5\" (UniqueName: \"kubernetes.io/projected/f08b24fc-253c-44c8-a272-78f4601644b1-kube-api-access-bdpf5\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e81f063c-6cde-4533-ac88-72044b1c8eef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030281 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf9dc\" (UniqueName: \"kubernetes.io/projected/e81f063c-6cde-4533-ac88-72044b1c8eef-kube-api-access-gf9dc\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07927adb-887c-4259-91ca-b46c8f9809e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030309 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d6lp\" (UniqueName: \"kubernetes.io/projected/770df778-b615-43fd-a60d-914f5691e3ac-kube-api-access-7d6lp\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030331 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98536bf1-6e84-479c-b744-648cd081d555-profile-collector-cert\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030354 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6fmq\" (UniqueName: \"kubernetes.io/projected/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-kube-api-access-g6fmq\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc 
kubenswrapper[4931]: I0131 04:26:23.030374 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qqfb\" (UniqueName: \"kubernetes.io/projected/19835a65-0362-4aa9-9152-540437cd2d90-kube-api-access-6qqfb\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030395 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-csi-data-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030419 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-config\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.030838 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1bd05f57-7c4f-4c00-96ce-e8f92338d14d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-49jws\" (UID: \"1bd05f57-7c4f-4c00-96ce-e8f92338d14d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.032040 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74f2108-1c90-40a5-853b-55503384e185-config\") pod \"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.032209 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08b24fc-253c-44c8-a272-78f4601644b1-serving-cert\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.032230 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f08b24fc-253c-44c8-a272-78f4601644b1-etcd-client\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.032586 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-config\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.035731 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a74f2108-1c90-40a5-853b-55503384e185-machine-approver-tls\") pod 
\"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.036071 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e81f063c-6cde-4533-ac88-72044b1c8eef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.039568 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa586ad-300a-4b7d-bb6d-e355b18670d6-serving-cert\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.044928 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ea039d-762a-4d09-a1a3-45d32be0e754-serving-cert\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.050579 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de94b2f3-5852-49ce-81f2-daad119be292-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.052770 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1588ab56-bcb9-4baa-bf88-8edca43a7ba5-metrics-tls\") pod \"dns-operator-744455d44c-x6pwq\" (UID: \"1588ab56-bcb9-4baa-bf88-8edca43a7ba5\") " pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.056909 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/928c64fe-ab31-4942-9bdd-64ab8b8339aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.061267 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdj2\" (UniqueName: \"kubernetes.io/projected/1bd05f57-7c4f-4c00-96ce-e8f92338d14d-kube-api-access-wzdj2\") pod \"cluster-samples-operator-665b6dd947-49jws\" (UID: \"1bd05f57-7c4f-4c00-96ce-e8f92338d14d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.072229 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 
31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.072309 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/135975db-0bfc-4fb8-b014-a3e51817c777-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.072576 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-bound-sa-token\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.089747 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2r6r\" (UniqueName: \"kubernetes.io/projected/70ea039d-762a-4d09-a1a3-45d32be0e754-kube-api-access-b2r6r\") pod \"controller-manager-879f6c89f-84x2t\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.117820 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de94b2f3-5852-49ce-81f2-daad119be292-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d6vx6\" (UID: \"de94b2f3-5852-49ce-81f2-daad119be292\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.132972 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg7x\" (UniqueName: \"kubernetes.io/projected/98536bf1-6e84-479c-b744-648cd081d555-kube-api-access-blg7x\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133023 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133051 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsj8c\" (UniqueName: \"kubernetes.io/projected/ce116733-a153-44bf-838d-7bd7593a3b96-kube-api-access-hsj8c\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133067 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d88b1b9a-0913-4310-a020-006c16fd5fff-srv-cert\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133084 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-stats-auth\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133104 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133122 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/50968d0b-3fb0-4208-b5df-b1a07c341f0a-metrics-tls\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133167 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c1a6dcc8-df4b-47d7-b871-30b183c83de2-node-bootstrap-token\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133187 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c1a6dcc8-df4b-47d7-b871-30b183c83de2-certs\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133202 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-metrics-certs\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133216 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-default-certificate\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133231 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh282\" (UniqueName: \"kubernetes.io/projected/fc0e3a20-b429-47e1-8100-aa1fae313bf7-kube-api-access-zh282\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133256 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433c2cfb-3f6c-4fe2-9289-237f398dea0b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 
04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133275 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc0e3a20-b429-47e1-8100-aa1fae313bf7-service-ca-bundle\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133293 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/16b7b136-dad4-4347-9941-d97a23fa694c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgjvq\" (UID: \"16b7b136-dad4-4347-9941-d97a23fa694c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133308 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133328 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t66cq\" (UniqueName: \"kubernetes.io/projected/d88b1b9a-0913-4310-a020-006c16fd5fff-kube-api-access-t66cq\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133356 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7595206-8944-4009-bcd7-f9952d225277-config-volume\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133370 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50968d0b-3fb0-4208-b5df-b1a07c341f0a-config-volume\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133387 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39de16fa-35f7-4286-8160-f29fd1059389-config\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133410 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cml4\" (UniqueName: \"kubernetes.io/projected/d7595206-8944-4009-bcd7-f9952d225277-kube-api-access-8cml4\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133434 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/10bb0556-24cb-479a-bbde-e360630fe24a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133449 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433c2cfb-3f6c-4fe2-9289-237f398dea0b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-registration-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133484 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0a4f1385-7a3c-4195-8b66-1df921d7187b-images\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133513 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d6lp\" (UniqueName: \"kubernetes.io/projected/770df778-b615-43fd-a60d-914f5691e3ac-kube-api-access-7d6lp\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133530 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6fmq\" (UniqueName: \"kubernetes.io/projected/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-kube-api-access-g6fmq\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133546 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qqfb\" (UniqueName: \"kubernetes.io/projected/19835a65-0362-4aa9-9152-540437cd2d90-kube-api-access-6qqfb\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133562 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-csi-data-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133578 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98536bf1-6e84-479c-b744-648cd081d555-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133594 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-mountpoint-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133610 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd2lk\" (UniqueName: \"kubernetes.io/projected/39de16fa-35f7-4286-8160-f29fd1059389-kube-api-access-pd2lk\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133628 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce116733-a153-44bf-838d-7bd7593a3b96-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133645 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d88b1b9a-0913-4310-a020-006c16fd5fff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133660 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-socket-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133675 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e32d743c-c801-41a1-9f35-3b641554339f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dm9q4\" (UID: \"e32d743c-c801-41a1-9f35-3b641554339f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133690 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgq4x\" (UniqueName: \"kubernetes.io/projected/3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c-kube-api-access-xgq4x\") pod \"ingress-canary-9fzp8\" (UID: \"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c\") " pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.133705 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a4f1385-7a3c-4195-8b66-1df921d7187b-proxy-tls\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc 
kubenswrapper[4931]: I0131 04:26:23.134087 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv9r2\" (UniqueName: \"kubernetes.io/projected/50968d0b-3fb0-4208-b5df-b1a07c341f0a-kube-api-access-vv9r2\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134107 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134125 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crnp2\" (UniqueName: \"kubernetes.io/projected/0f58bb1e-365a-4d56-8b48-5cb0e8e12982-kube-api-access-crnp2\") pod \"migrator-59844c95c7-zlqj2\" (UID: \"0f58bb1e-365a-4d56-8b48-5cb0e8e12982\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134143 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7595206-8944-4009-bcd7-f9952d225277-secret-volume\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134160 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb0556-24cb-479a-bbde-e360630fe24a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134178 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlmm6\" (UniqueName: \"kubernetes.io/projected/10bb0556-24cb-479a-bbde-e360630fe24a-kube-api-access-vlmm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134194 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19835a65-0362-4aa9-9152-540437cd2d90-webhook-cert\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134210 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2ng8\" (UniqueName: \"kubernetes.io/projected/b94cc359-b91d-4058-9b99-daee5cb58497-kube-api-access-c2ng8\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134227 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/29ed5258-385e-4835-89ed-03e3c21cc7cb-signing-key\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134246 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fefdfe52-daca-429b-af44-e9f855996b8e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nh9j9\" (UID: \"fefdfe52-daca-429b-af44-e9f855996b8e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-plugins-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134284 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmmk\" (UniqueName: \"kubernetes.io/projected/0a4f1385-7a3c-4195-8b66-1df921d7187b-kube-api-access-kwmmk\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134288 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134438 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-mountpoint-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134302 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlrh\" (UniqueName: \"kubernetes.io/projected/fefdfe52-daca-429b-af44-e9f855996b8e-kube-api-access-twlrh\") pod \"package-server-manager-789f6589d5-nh9j9\" (UID: \"fefdfe52-daca-429b-af44-e9f855996b8e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134652 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce116733-a153-44bf-838d-7bd7593a3b96-proxy-tls\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134768 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/29ed5258-385e-4835-89ed-03e3c21cc7cb-signing-cabundle\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc 
kubenswrapper[4931]: I0131 04:26:23.134804 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c-cert\") pod \"ingress-canary-9fzp8\" (UID: \"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c\") " pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134837 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsw87\" (UniqueName: \"kubernetes.io/projected/e32d743c-c801-41a1-9f35-3b641554339f-kube-api-access-fsw87\") pod \"multus-admission-controller-857f4d67dd-dm9q4\" (UID: \"e32d743c-c801-41a1-9f35-3b641554339f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134898 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134947 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-766sg\" (UniqueName: \"kubernetes.io/projected/16b7b136-dad4-4347-9941-d97a23fa694c-kube-api-access-766sg\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgjvq\" (UID: \"16b7b136-dad4-4347-9941-d97a23fa694c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.134976 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39de16fa-35f7-4286-8160-f29fd1059389-serving-cert\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135011 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/433c2cfb-3f6c-4fe2-9289-237f398dea0b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135036 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19835a65-0362-4aa9-9152-540437cd2d90-apiservice-cert\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135057 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a4f1385-7a3c-4195-8b66-1df921d7187b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135109 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jbkdk\" (UniqueName: \"kubernetes.io/projected/c1a6dcc8-df4b-47d7-b871-30b183c83de2-kube-api-access-jbkdk\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135142 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/19835a65-0362-4aa9-9152-540437cd2d90-tmpfs\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135175 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98536bf1-6e84-479c-b744-648cd081d555-srv-cert\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135206 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9655t\" (UniqueName: \"kubernetes.io/projected/29ed5258-385e-4835-89ed-03e3c21cc7cb-kube-api-access-9655t\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.135370 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:23.635343305 +0000 UTC m=+142.444572179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135598 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-registration-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.135751 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50968d0b-3fb0-4208-b5df-b1a07c341f0a-config-volume\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.136454 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce116733-a153-44bf-838d-7bd7593a3b96-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.136453 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0a4f1385-7a3c-4195-8b66-1df921d7187b-images\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.137232 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.137447 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/29ed5258-385e-4835-89ed-03e3c21cc7cb-signing-cabundle\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.137645 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc0e3a20-b429-47e1-8100-aa1fae313bf7-service-ca-bundle\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.137702 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39de16fa-35f7-4286-8160-f29fd1059389-config\") pod 
\"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.138301 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433c2cfb-3f6c-4fe2-9289-237f398dea0b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.138664 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7595206-8944-4009-bcd7-f9952d225277-config-volume\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.137902 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-csi-data-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.139441 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/19835a65-0362-4aa9-9152-540437cd2d90-tmpfs\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.139670 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb0556-24cb-479a-bbde-e360630fe24a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.139205 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a4f1385-7a3c-4195-8b66-1df921d7187b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.140306 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.140778 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-socket-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 
04:26:23.140913 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/770df778-b615-43fd-a60d-914f5691e3ac-plugins-dir\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.144001 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7spv\" (UniqueName: \"kubernetes.io/projected/135975db-0bfc-4fb8-b014-a3e51817c777-kube-api-access-f7spv\") pod \"ingress-operator-5b745b69d9-4w7p4\" (UID: \"135975db-0bfc-4fb8-b014-a3e51817c777\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.146581 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98536bf1-6e84-479c-b744-648cd081d555-srv-cert\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.146690 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19835a65-0362-4aa9-9152-540437cd2d90-apiservice-cert\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.147287 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-stats-auth\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.148199 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bb0556-24cb-479a-bbde-e360630fe24a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.148348 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-default-certificate\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.148361 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98536bf1-6e84-479c-b744-648cd081d555-profile-collector-cert\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.148401 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c1a6dcc8-df4b-47d7-b871-30b183c83de2-certs\") pod \"machine-config-server-ggk58\" (UID: 
\"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.148404 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d88b1b9a-0913-4310-a020-006c16fd5fff-srv-cert\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.152235 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/50968d0b-3fb0-4208-b5df-b1a07c341f0a-metrics-tls\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.152286 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7595206-8944-4009-bcd7-f9952d225277-secret-volume\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.152290 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fefdfe52-daca-429b-af44-e9f855996b8e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nh9j9\" (UID: \"fefdfe52-daca-429b-af44-e9f855996b8e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.152346 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l477s\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-kube-api-access-l477s\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.152477 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce116733-a153-44bf-838d-7bd7593a3b96-proxy-tls\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.152949 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d88b1b9a-0913-4310-a020-006c16fd5fff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.153019 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39de16fa-35f7-4286-8160-f29fd1059389-serving-cert\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.153117 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c1a6dcc8-df4b-47d7-b871-30b183c83de2-node-bootstrap-token\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.153260 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/29ed5258-385e-4835-89ed-03e3c21cc7cb-signing-key\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.154020 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19835a65-0362-4aa9-9152-540437cd2d90-webhook-cert\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.154638 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/16b7b136-dad4-4347-9941-d97a23fa694c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgjvq\" (UID: \"16b7b136-dad4-4347-9941-d97a23fa694c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.155241 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a4f1385-7a3c-4195-8b66-1df921d7187b-proxy-tls\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.155307 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.155517 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e32d743c-c801-41a1-9f35-3b641554339f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dm9q4\" (UID: \"e32d743c-c801-41a1-9f35-3b641554339f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.155680 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.155907 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttr4l\" (UniqueName: \"kubernetes.io/projected/a74f2108-1c90-40a5-853b-55503384e185-kube-api-access-ttr4l\") pod 
\"machine-approver-56656f9798-gctwh\" (UID: \"a74f2108-1c90-40a5-853b-55503384e185\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.157357 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c-cert\") pod \"ingress-canary-9fzp8\" (UID: \"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c\") " pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.158171 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc0e3a20-b429-47e1-8100-aa1fae313bf7-metrics-certs\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.158236 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433c2cfb-3f6c-4fe2-9289-237f398dea0b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.200355 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrpf9\" (UniqueName: \"kubernetes.io/projected/07927adb-887c-4259-91ca-b46c8f9809e4-kube-api-access-mrpf9\") pod \"openshift-apiserver-operator-796bbdcf4f-7pgt9\" (UID: \"07927adb-887c-4259-91ca-b46c8f9809e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.212647 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7n4t\" (UniqueName: \"kubernetes.io/projected/1588ab56-bcb9-4baa-bf88-8edca43a7ba5-kube-api-access-d7n4t\") pod \"dns-operator-744455d44c-x6pwq\" (UID: \"1588ab56-bcb9-4baa-bf88-8edca43a7ba5\") " pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.235219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5789585-9fd2-4f7a-9ac7-8ff947f60c2e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grxxg\" (UID: \"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.237553 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.238067 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:23.73804919 +0000 UTC m=+142.547278064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.257352 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6bbp\" (UniqueName: \"kubernetes.io/projected/fd126257-f24f-4597-8e75-e6d1c24e8709-kube-api-access-v6bbp\") pod \"downloads-7954f5f757-jvt8b\" (UID: \"fd126257-f24f-4597-8e75-e6d1c24e8709\") " pod="openshift-console/downloads-7954f5f757-jvt8b" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.271631 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.275163 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5p2zv"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.279361 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc75l\" (UniqueName: \"kubernetes.io/projected/faa586ad-300a-4b7d-bb6d-e355b18670d6-kube-api-access-fc75l\") pod \"console-operator-58897d9998-bmg8v\" (UID: \"faa586ad-300a-4b7d-bb6d-e355b18670d6\") " pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.293118 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdpf5\" (UniqueName: \"kubernetes.io/projected/f08b24fc-253c-44c8-a272-78f4601644b1-kube-api-access-bdpf5\") pod \"etcd-operator-b45778765-fd6j6\" (UID: \"f08b24fc-253c-44c8-a272-78f4601644b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.317080 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.321415 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.330284 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.335272 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e81f063c-6cde-4533-ac88-72044b1c8eef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.337659 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.347211 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.347575 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.348132 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:23.848082971 +0000 UTC m=+142.657311845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.348382 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf9dc\" (UniqueName: \"kubernetes.io/projected/e81f063c-6cde-4533-ac88-72044b1c8eef-kube-api-access-gf9dc\") pod \"cluster-image-registry-operator-dc59b4c8b-pvr69\" (UID: \"e81f063c-6cde-4533-ac88-72044b1c8eef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.353401 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlrh\" (UniqueName: \"kubernetes.io/projected/fefdfe52-daca-429b-af44-e9f855996b8e-kube-api-access-twlrh\") pod \"package-server-manager-789f6589d5-nh9j9\" (UID: \"fefdfe52-daca-429b-af44-e9f855996b8e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.361779 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.374340 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd2lk\" (UniqueName: \"kubernetes.io/projected/39de16fa-35f7-4286-8160-f29fd1059389-kube-api-access-pd2lk\") pod \"service-ca-operator-777779d784-894fk\" (UID: \"39de16fa-35f7-4286-8160-f29fd1059389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.400478 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66cq\" (UniqueName: \"kubernetes.io/projected/d88b1b9a-0913-4310-a020-006c16fd5fff-kube-api-access-t66cq\") pod \"olm-operator-6b444d44fb-h4skc\" (UID: \"d88b1b9a-0913-4310-a020-006c16fd5fff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.402286 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.415214 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9655t\" (UniqueName: \"kubernetes.io/projected/29ed5258-385e-4835-89ed-03e3c21cc7cb-kube-api-access-9655t\") pod \"service-ca-9c57cc56f-nqffw\" (UID: \"29ed5258-385e-4835-89ed-03e3c21cc7cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.415252 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.420542 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvt8b" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.428684 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.432402 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cml4\" (UniqueName: \"kubernetes.io/projected/d7595206-8944-4009-bcd7-f9952d225277-kube-api-access-8cml4\") pod \"collect-profiles-29497215-j87zm\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.449293 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.449492 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:23.949454853 +0000 UTC m=+142.758683737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.449869 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.450306 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:23.95029013 +0000 UTC m=+142.759519004 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.455591 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg7x\" (UniqueName: \"kubernetes.io/projected/98536bf1-6e84-479c-b744-648cd081d555-kube-api-access-blg7x\") pod \"catalog-operator-68c6474976-sf6b2\" (UID: \"98536bf1-6e84-479c-b744-648cd081d555\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.472532 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d6lp\" (UniqueName: \"kubernetes.io/projected/770df778-b615-43fd-a60d-914f5691e3ac-kube-api-access-7d6lp\") pod \"csi-hostpathplugin-q7slc\" (UID: \"770df778-b615-43fd-a60d-914f5691e3ac\") " pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.489525 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6fmq\" (UniqueName: \"kubernetes.io/projected/c7be93e4-9ead-466c-90d5-a6b53cc0c1fd-kube-api-access-g6fmq\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p5px\" (UID: \"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.499601 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.506108 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.514149 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.518096 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qqfb\" (UniqueName: \"kubernetes.io/projected/19835a65-0362-4aa9-9152-540437cd2d90-kube-api-access-6qqfb\") pod \"packageserver-d55dfcdfc-qvwbq\" (UID: \"19835a65-0362-4aa9-9152-540437cd2d90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.519560 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.529380 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.534024 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh282\" (UniqueName: \"kubernetes.io/projected/fc0e3a20-b429-47e1-8100-aa1fae313bf7-kube-api-access-zh282\") pod \"router-default-5444994796-s5dt7\" (UID: \"fc0e3a20-b429-47e1-8100-aa1fae313bf7\") " pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.542676 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.552057 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.563649 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bmrsg"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.565831 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/433c2cfb-3f6c-4fe2-9289-237f398dea0b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkzm6\" (UID: \"433c2cfb-3f6c-4fe2-9289-237f398dea0b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.578107 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.579667 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.580263 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 04:26:24.080243432 +0000 UTC m=+142.889472306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.581437 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsj8c\" (UniqueName: \"kubernetes.io/projected/ce116733-a153-44bf-838d-7bd7593a3b96-kube-api-access-hsj8c\") pod \"machine-config-controller-84d6567774-9gcsx\" (UID: \"ce116733-a153-44bf-838d-7bd7593a3b96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.595025 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.600343 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbkdk\" (UniqueName: \"kubernetes.io/projected/c1a6dcc8-df4b-47d7-b871-30b183c83de2-kube-api-access-jbkdk\") pod \"machine-config-server-ggk58\" (UID: \"c1a6dcc8-df4b-47d7-b871-30b183c83de2\") " pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.606962 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ggk58" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.626599 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2ng8\" (UniqueName: \"kubernetes.io/projected/b94cc359-b91d-4058-9b99-daee5cb58497-kube-api-access-c2ng8\") pod \"marketplace-operator-79b997595-drqrf\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.637911 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmmk\" (UniqueName: \"kubernetes.io/projected/0a4f1385-7a3c-4195-8b66-1df921d7187b-kube-api-access-kwmmk\") pod \"machine-config-operator-74547568cd-78z2q\" (UID: \"0a4f1385-7a3c-4195-8b66-1df921d7187b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.637975 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7z4zh"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.642995 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.643741 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bsjw7"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.652111 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.657280 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgq4x\" (UniqueName: \"kubernetes.io/projected/3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c-kube-api-access-xgq4x\") pod \"ingress-canary-9fzp8\" (UID: \"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c\") " pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.689179 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.689672 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.189660263 +0000 UTC m=+142.998889137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.690543 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" event={"ID":"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd","Type":"ContainerStarted","Data":"4d67bd9342c84fb1ad92565804f20464bd57f2ca9134459d1dd179e2d44d5fed"} Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.690581 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" event={"ID":"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd","Type":"ContainerStarted","Data":"3fc6ebf27167c3df9612877427a2c1879fc8aa95bab56dd402c3d1bbf423893b"} Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.693773 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsw87\" (UniqueName: \"kubernetes.io/projected/e32d743c-c801-41a1-9f35-3b641554339f-kube-api-access-fsw87\") pod \"multus-admission-controller-857f4d67dd-dm9q4\" (UID: \"e32d743c-c801-41a1-9f35-3b641554339f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.705191 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x6pwq"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.708642 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.711771 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crnp2\" (UniqueName: \"kubernetes.io/projected/0f58bb1e-365a-4d56-8b48-5cb0e8e12982-kube-api-access-crnp2\") pod 
\"migrator-59844c95c7-zlqj2\" (UID: \"0f58bb1e-365a-4d56-8b48-5cb0e8e12982\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.714710 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" event={"ID":"1acbd6a9-2643-4c2c-9a20-4da63545ac23","Type":"ContainerStarted","Data":"74db990ab8aff953fd145b9e7eb905dc1d0a60acb180e6e32bb10e0825a0cd70"} Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.721281 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8hh7p"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.721503 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-766sg\" (UniqueName: \"kubernetes.io/projected/16b7b136-dad4-4347-9941-d97a23fa694c-kube-api-access-766sg\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgjvq\" (UID: \"16b7b136-dad4-4347-9941-d97a23fa694c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.722701 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" event={"ID":"a74f2108-1c90-40a5-853b-55503384e185","Type":"ContainerStarted","Data":"31d9bb4645a0580556a6ab306f4bcb1dc5bca5b3e28be084498e9900ab71830f"} Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.730159 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bmg8v"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.744473 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.747480 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" event={"ID":"7f411a06-d760-4d52-8939-36856b6813ad","Type":"ContainerStarted","Data":"7dde8981f8e7999e61959545da0f4d3c5557b5603959a997e9532c4c1e5c30fa"} Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.747539 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" event={"ID":"7f411a06-d760-4d52-8939-36856b6813ad","Type":"ContainerStarted","Data":"e3ca8a95e5a4d5d63f63ef64cd03cc0606c6cdeb464104153a179241d3a5c190"} Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.751577 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.758970 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlmm6\" (UniqueName: \"kubernetes.io/projected/10bb0556-24cb-479a-bbde-e360630fe24a-kube-api-access-vlmm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-tvsqj\" (UID: \"10bb0556-24cb-479a-bbde-e360630fe24a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.760083 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.766069 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv9r2\" (UniqueName: \"kubernetes.io/projected/50968d0b-3fb0-4208-b5df-b1a07c341f0a-kube-api-access-vv9r2\") pod \"dns-default-5g5dr\" (UID: \"50968d0b-3fb0-4208-b5df-b1a07c341f0a\") " pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.770173 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.776923 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.785287 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.790101 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.792265 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.292248094 +0000 UTC m=+143.101476968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.793387 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.838950 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.850205 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg"] Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.870772 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.878846 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.899968 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:23 crc kubenswrapper[4931]: E0131 04:26:23.900430 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.400415225 +0000 UTC m=+143.209644099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.900772 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:23 crc kubenswrapper[4931]: I0131 04:26:23.917355 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9fzp8" Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.001939 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.002155 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.502122448 +0000 UTC m=+143.311351322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.002217 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.002532 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.502518981 +0000 UTC m=+143.311747855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.021282 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.021333 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-84x2t"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.103544 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.103990 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.603974036 +0000 UTC m=+143.413202900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.128607 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fd6j6"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.131269 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-894fk"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.152432 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.182605 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvt8b"] Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.195032 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ea039d_762a_4d09_a1a3_45d32be0e754.slice/crio-5939c6696985b2711bf2f7863239bc74efa5aa10d79d84da6424d41ed683d537 WatchSource:0}: Error finding container 5939c6696985b2711bf2f7863239bc74efa5aa10d79d84da6424d41ed683d537: Status 404 returned error can't find the container with id 5939c6696985b2711bf2f7863239bc74efa5aa10d79d84da6424d41ed683d537 Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.206704 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.207108 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.707092484 +0000 UTC m=+143.516321358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.307210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.307377 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.80733668 +0000 UTC m=+143.616565554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.307949 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.308234 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.808220118 +0000 UTC m=+143.617448992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.316549 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.348617 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.408938 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.410087 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:24.910060015 +0000 UTC m=+143.719288889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.440378 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.446603 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.455560 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nqffw"] Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.466846 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19835a65_0362_4aa9_9152_540437cd2d90.slice/crio-f2a266e877fddb685471fe32d59efe436389cd358c69d815e6e9d9ad47473496 WatchSource:0}: Error finding container f2a266e877fddb685471fe32d59efe436389cd358c69d815e6e9d9ad47473496: Status 404 returned error can't find the container with id f2a266e877fddb685471fe32d59efe436389cd358c69d815e6e9d9ad47473496 Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.473786 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.474668 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.517103 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.517457 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.017445801 +0000 UTC m=+143.826674675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.542101 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" podStartSLOduration=122.542076334 podStartE2EDuration="2m2.542076334s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:24.535181272 +0000 UTC m=+143.344410146" watchObservedRunningTime="2026-01-31 04:26:24.542076334 +0000 UTC m=+143.351305208" Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.545879 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.569832 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69"] Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.572445 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29ed5258_385e_4835_89ed_03e3c21cc7cb.slice/crio-493a84220b63840ea13560a3801cdec4490547d93b84444e06e974367050af84 WatchSource:0}: Error finding container 493a84220b63840ea13560a3801cdec4490547d93b84444e06e974367050af84: Status 404 returned error can't find the container with id 493a84220b63840ea13560a3801cdec4490547d93b84444e06e974367050af84 Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.596924 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q"] Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.609049 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98536bf1_6e84_479c_b744_648cd081d555.slice/crio-fe7a96c9b5ecf455b8c66c75b6fbdbda738cfd0ce970d55cf81a9431718bc0f6 WatchSource:0}: Error finding container fe7a96c9b5ecf455b8c66c75b6fbdbda738cfd0ce970d55cf81a9431718bc0f6: Status 404 returned error 
can't find the container with id fe7a96c9b5ecf455b8c66c75b6fbdbda738cfd0ce970d55cf81a9431718bc0f6 Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.628684 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.628978 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.128939899 +0000 UTC m=+143.938168773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.631195 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.632539 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.132514214 +0000 UTC m=+143.941743098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.649875 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7595206_8944_4009_bcd7_f9952d225277.slice/crio-57be4eabfa3f847a70ca8145f450baae6b7336490e14fab061ca5c014b236b8e WatchSource:0}: Error finding container 57be4eabfa3f847a70ca8145f450baae6b7336490e14fab061ca5c014b236b8e: Status 404 returned error can't find the container with id 57be4eabfa3f847a70ca8145f450baae6b7336490e14fab061ca5c014b236b8e Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.682685 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q7slc"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.693700 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.733798 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.734316 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.234288049 +0000 UTC m=+144.043516923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.742446 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7be93e4_9ead_466c_90d5_a6b53cc0c1fd.slice/crio-d827f556e1a661793a9b2dd7c915c4497a0e48e234b7b108dc0e8e85ae8d700f WatchSource:0}: Error finding container d827f556e1a661793a9b2dd7c915c4497a0e48e234b7b108dc0e8e85ae8d700f: Status 404 returned error can't find the container with id d827f556e1a661793a9b2dd7c915c4497a0e48e234b7b108dc0e8e85ae8d700f Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.774399 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" event={"ID":"a74f2108-1c90-40a5-853b-55503384e185","Type":"ContainerStarted","Data":"75c7d0d8a35d13542c34cceda2995459b29a771b6bab8ed67c63a204f869f71b"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.778586 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.789125 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" event={"ID":"fefdfe52-daca-429b-af44-e9f855996b8e","Type":"ContainerStarted","Data":"0ad7055821af4e1f5484e6370158c8236a4c8426eec560c1783c4d9742eb671c"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.789477 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.791479 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5dt7" event={"ID":"fc0e3a20-b429-47e1-8100-aa1fae313bf7","Type":"ContainerStarted","Data":"07e7e4a903d350cff66aa34478c710b78fc5114d5caa9334e36671d96af82628"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.796110 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" event={"ID":"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea","Type":"ContainerStarted","Data":"9f1fd1fe121bd5cfadc1cb1503a2ccd6d24f64c4e23bc1673034f02f4b08adf5"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.804087 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" event={"ID":"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd","Type":"ContainerStarted","Data":"d827f556e1a661793a9b2dd7c915c4497a0e48e234b7b108dc0e8e85ae8d700f"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.810856 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvt8b" event={"ID":"fd126257-f24f-4597-8e75-e6d1c24e8709","Type":"ContainerStarted","Data":"8c048bdcd5651b307ff8010d9fa94213319fd16d08d47a749dbbec7d8adbe75a"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.819524 4931 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" event={"ID":"34be6968-eb64-46a3-9e5f-f5568d764d8d","Type":"ContainerStarted","Data":"4e9b25fa9b8ca9a911b8e5cdd201f9d3c1f57422ba4197604fa827a715e6aaef"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.832127 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" event={"ID":"07927adb-887c-4259-91ca-b46c8f9809e4","Type":"ContainerStarted","Data":"8bfc5bc804b080e1f18b2210d03c50cdfb9c258f1ccec4bcd8043a4e49c646c9"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.834550 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" event={"ID":"135975db-0bfc-4fb8-b014-a3e51817c777","Type":"ContainerStarted","Data":"b762a323ff02202cdf3ee837e71c8838900b94abebf5e19051d5ca2e196a7c65"} Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.834840 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce116733_a153_44bf_838d_7bd7593a3b96.slice/crio-dd6f5954ac69e9397ce457ed4e4081f80d6927981a11730af06cf26c21f52b27 WatchSource:0}: Error finding container dd6f5954ac69e9397ce457ed4e4081f80d6927981a11730af06cf26c21f52b27: Status 404 returned error can't find the container with id dd6f5954ac69e9397ce457ed4e4081f80d6927981a11730af06cf26c21f52b27 Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.835162 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.835605 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.335592949 +0000 UTC m=+144.144821813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.840094 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" event={"ID":"29ed5258-385e-4835-89ed-03e3c21cc7cb","Type":"ContainerStarted","Data":"493a84220b63840ea13560a3801cdec4490547d93b84444e06e974367050af84"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.849885 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dm9q4"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.855197 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" podStartSLOduration=122.855166779 podStartE2EDuration="2m2.855166779s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:24.853366581 +0000 UTC m=+143.662595455" watchObservedRunningTime="2026-01-31 04:26:24.855166779 +0000 UTC m=+143.664395643" Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.881852 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" event={"ID":"d7595206-8944-4009-bcd7-f9952d225277","Type":"ContainerStarted","Data":"57be4eabfa3f847a70ca8145f450baae6b7336490e14fab061ca5c014b236b8e"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.891341 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" event={"ID":"433c2cfb-3f6c-4fe2-9289-237f398dea0b","Type":"ContainerStarted","Data":"5606dd72969d3f4ff652836efa6ad202a31a7983ed9bd0d4a580541b49ce2dd5"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.895389 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8hh7p" event={"ID":"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048","Type":"ContainerStarted","Data":"354a1527514c699e0b3a187f8995eee8f9b2288469d59257ce70dc24d8696f2d"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.895414 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8hh7p" event={"ID":"d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048","Type":"ContainerStarted","Data":"dfb0a14140b189f2116e48efb07393f0db91f58087ce568a93e29c9777a15800"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.912878 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2"] Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.936438 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.936569 4931 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.436543557 +0000 UTC m=+144.245772431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.937010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:24 crc kubenswrapper[4931]: E0131 04:26:24.937348 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.437334183 +0000 UTC m=+144.246563057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.958782 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" event={"ID":"98536bf1-6e84-479c-b744-648cd081d555","Type":"ContainerStarted","Data":"fe7a96c9b5ecf455b8c66c75b6fbdbda738cfd0ce970d55cf81a9431718bc0f6"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.962988 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" event={"ID":"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e","Type":"ContainerStarted","Data":"646a624695e57bad62e5e4dec00a071d3ec9573ddc4daa5c90bd8f1cbef954a0"} Jan 31 04:26:24 crc kubenswrapper[4931]: I0131 04:26:24.970902 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" event={"ID":"1588ab56-bcb9-4baa-bf88-8edca43a7ba5","Type":"ContainerStarted","Data":"efa81594262b373210325013f20aa78d0fe7c13466fa2af3b45ecd9168b75602"} Jan 31 04:26:24 crc kubenswrapper[4931]: W0131 04:26:24.995755 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f58bb1e_365a_4d56_8b48_5cb0e8e12982.slice/crio-3af69f4a2ea8cf2695d033e8bcb5c5e6e8f82a0a25c62b682c28189d9cb0aca3 WatchSource:0}: Error finding container 3af69f4a2ea8cf2695d033e8bcb5c5e6e8f82a0a25c62b682c28189d9cb0aca3: Status 404 returned error can't find the 
container with id 3af69f4a2ea8cf2695d033e8bcb5c5e6e8f82a0a25c62b682c28189d9cb0aca3 Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.006203 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" event={"ID":"19835a65-0362-4aa9-9152-540437cd2d90","Type":"ContainerStarted","Data":"f2a266e877fddb685471fe32d59efe436389cd358c69d815e6e9d9ad47473496"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.038538 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.040422 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.540395489 +0000 UTC m=+144.349624363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.041861 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ggk58" event={"ID":"c1a6dcc8-df4b-47d7-b871-30b183c83de2","Type":"ContainerStarted","Data":"78f160a8e99623a9c9675a0bdf4d6935aef8f28250fe39385b9cbab3e12751a5"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.041915 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ggk58" event={"ID":"c1a6dcc8-df4b-47d7-b871-30b183c83de2","Type":"ContainerStarted","Data":"52655ec88a7114749976052100369f683138da46423411837634ef8d046f69e6"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.046963 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" event={"ID":"39de16fa-35f7-4286-8160-f29fd1059389","Type":"ContainerStarted","Data":"be9651b57675230e9597b04f569a532c0ae75db10af14149836c62b94c8d80da"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.054787 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" event={"ID":"d88b1b9a-0913-4310-a020-006c16fd5fff","Type":"ContainerStarted","Data":"a473d9ab0fbb6bc5fef789e232f8735c898eddda0fd41b1bfa6aa36c6d5cd0a1"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.067254 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" event={"ID":"7f411a06-d760-4d52-8939-36856b6813ad","Type":"ContainerStarted","Data":"96e7bcb3d78e878e205b7f33c677cf78659cbab55c411a562bd897de31b88a13"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.077251 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" 
event={"ID":"f08b24fc-253c-44c8-a272-78f4601644b1","Type":"ContainerStarted","Data":"324013d6723e756b363e6b67a8a25509f4e0ee6c9277b1b9d0f6ac58aa5a134e"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.078844 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drqrf"] Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.084295 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" event={"ID":"70ea039d-762a-4d09-a1a3-45d32be0e754","Type":"ContainerStarted","Data":"5939c6696985b2711bf2f7863239bc74efa5aa10d79d84da6424d41ed683d537"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.091860 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" event={"ID":"de94b2f3-5852-49ce-81f2-daad119be292","Type":"ContainerStarted","Data":"20cf56cae2d37acd1d6a1a00f48cba6a4b5cccd67679467739378ddc393878f3"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.098522 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" event={"ID":"fa274e63-34e4-461b-bd7a-270fc7bba034","Type":"ContainerStarted","Data":"d45c8ab28c0d748106c0f9503ca3725cea5ecfcf93f46772e4bdb3503b671d76"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.098584 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" event={"ID":"fa274e63-34e4-461b-bd7a-270fc7bba034","Type":"ContainerStarted","Data":"5cfc433c6030f6666e8e98d16e6b3c533235b1a93c80b975a95e437e3cc1b32f"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.102193 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" event={"ID":"faa586ad-300a-4b7d-bb6d-e355b18670d6","Type":"ContainerStarted","Data":"b0724c4ebfdb1ced70b8ec1ab7eae43d611923c8e95a64ece1c13c922009792f"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.102220 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" event={"ID":"faa586ad-300a-4b7d-bb6d-e355b18670d6","Type":"ContainerStarted","Data":"939744f8abd6ea98039f904f7b6ba9b62eb6a78719e87b51d2650a78dd931919"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.102525 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.108852 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" event={"ID":"1bd05f57-7c4f-4c00-96ce-e8f92338d14d","Type":"ContainerStarted","Data":"068777c26ff127149a0c1d2079f9ec2015c73cf031b76fee5b3f08648f6dc8cf"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.108917 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" event={"ID":"1bd05f57-7c4f-4c00-96ce-e8f92338d14d","Type":"ContainerStarted","Data":"5c88bcd772de0fb243524bffc3d699de194b0a9038bf3afa1e8d515a955269a5"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.115050 4931 patch_prober.go:28] interesting pod/console-operator-58897d9998-bmg8v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.115128 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" podUID="faa586ad-300a-4b7d-bb6d-e355b18670d6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.117861 4931 generic.go:334] "Generic (PLEG): container finished" podID="13d327eb-39b7-4a67-8a2f-a372ccbbd5de" containerID="ea61ef97a16a01fd7341af92167b0695de43cb905703b8dda92a901e69128e51" exitCode=0 Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.119286 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" event={"ID":"13d327eb-39b7-4a67-8a2f-a372ccbbd5de","Type":"ContainerDied","Data":"ea61ef97a16a01fd7341af92167b0695de43cb905703b8dda92a901e69128e51"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.119350 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.119363 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" event={"ID":"13d327eb-39b7-4a67-8a2f-a372ccbbd5de","Type":"ContainerStarted","Data":"a452385a1563016644594c1ba104e989c75407621e44e031c6863985c109e929"} Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.136554 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.141666 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.142247 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.642224286 +0000 UTC m=+144.451453160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.215704 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5g5dr"] Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.229592 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9fzp8"] Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.242317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.242699 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.742679619 +0000 UTC m=+144.551908493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.243662 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.244505 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.744485267 +0000 UTC m=+144.553714351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: W0131 04:26:25.333040 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50968d0b_3fb0_4208_b5df_b1a07c341f0a.slice/crio-13e9aa77ac90afe6acdf59a0a439907eed3bf9aaec56313c01bac2bd318633e9 WatchSource:0}: Error finding container 13e9aa77ac90afe6acdf59a0a439907eed3bf9aaec56313c01bac2bd318633e9: Status 404 returned error can't find the container with id 13e9aa77ac90afe6acdf59a0a439907eed3bf9aaec56313c01bac2bd318633e9 Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.345382 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.345529 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.845502177 +0000 UTC m=+144.654731051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.345769 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.346079 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.846072555 +0000 UTC m=+144.655301429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.416572 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7z4zh" podStartSLOduration=124.416553983 podStartE2EDuration="2m4.416553983s" podCreationTimestamp="2026-01-31 04:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:25.408290747 +0000 UTC m=+144.217519631" watchObservedRunningTime="2026-01-31 04:26:25.416553983 +0000 UTC m=+144.225782857" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.452331 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.452530 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.95250725 +0000 UTC m=+144.761736124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.452605 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.453130 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:25.95312171 +0000 UTC m=+144.762350584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.453492 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ggk58" podStartSLOduration=5.453470191 podStartE2EDuration="5.453470191s" podCreationTimestamp="2026-01-31 04:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:25.45343372 +0000 UTC m=+144.262662584" watchObservedRunningTime="2026-01-31 04:26:25.453470191 +0000 UTC m=+144.262699065" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.560132 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.560271 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.060250267 +0000 UTC m=+144.869479141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.560988 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.561378 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.061365753 +0000 UTC m=+144.870594627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.662297 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.662514 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.162496488 +0000 UTC m=+144.971725362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.664116 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.667003 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.166968431 +0000 UTC m=+144.976197305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.686774 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8hh7p" podStartSLOduration=123.686743338 podStartE2EDuration="2m3.686743338s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:25.685812338 +0000 UTC m=+144.495041212" watchObservedRunningTime="2026-01-31 04:26:25.686743338 +0000 UTC m=+144.495972202" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.715089 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5p2zv" podStartSLOduration=123.715072519 podStartE2EDuration="2m3.715072519s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:25.714280344 +0000 UTC m=+144.523509218" watchObservedRunningTime="2026-01-31 04:26:25.715072519 +0000 UTC m=+144.524301393" Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.767187 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.767912 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.267883819 +0000 UTC m=+145.077112693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.869309 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.869861 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.36984774 +0000 UTC m=+145.179076614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:25 crc kubenswrapper[4931]: I0131 04:26:25.970346 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:25 crc kubenswrapper[4931]: E0131 04:26:25.970895 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.470879651 +0000 UTC m=+145.280108525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.010893 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" podStartSLOduration=124.010871318 podStartE2EDuration="2m4.010871318s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:25.978963941 +0000 UTC m=+144.788192815" watchObservedRunningTime="2026-01-31 04:26:26.010871318 +0000 UTC m=+144.820100182" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.071853 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.072213 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.572200332 +0000 UTC m=+145.381429206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.172512 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.173147 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.67312891 +0000 UTC m=+145.482357784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.177707 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" event={"ID":"770df778-b615-43fd-a60d-914f5691e3ac","Type":"ContainerStarted","Data":"9ccac9cb81579d9482000c9c5591b6274b48c661608f9f52a32c6558a4f89cd0"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.246271 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" event={"ID":"34be6968-eb64-46a3-9e5f-f5568d764d8d","Type":"ContainerStarted","Data":"4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.248475 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.251975 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9fzp8" event={"ID":"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c","Type":"ContainerStarted","Data":"fcbfe5ec256184598086aa73d6b354aeb35c48323ae40ddf6515bc4de2f06f7e"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.268186 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" event={"ID":"e81f063c-6cde-4533-ac88-72044b1c8eef","Type":"ContainerStarted","Data":"50f24e209f8ccc5b5ad02c00a2a5b40ee011cb56212df515ee4b0950fe2cb52b"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.272368 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" event={"ID":"d88b1b9a-0913-4310-a020-006c16fd5fff","Type":"ContainerStarted","Data":"d00c06d8825546e8d13fcf0342ff7638d393ef31c9ea291b1f66593d21fe446f"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.273264 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.273881 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.274255 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.774241713 +0000 UTC m=+145.583470587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.274827 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" event={"ID":"b94cc359-b91d-4058-9b99-daee5cb58497","Type":"ContainerStarted","Data":"9c97d735014b82bc8456fc73ff30cf9bf0e79a9c62a6cd7d47740f18c52ad2c1"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.296020 4931 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-h4skc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.296069 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" podUID="d88b1b9a-0913-4310-a020-006c16fd5fff" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.296366 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" event={"ID":"98536bf1-6e84-479c-b744-648cd081d555","Type":"ContainerStarted","Data":"dd83697e485f1cf58a7aa3683c32d524dc08cad6f7c0069f11ab284747b4ded1"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.297190 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.310623 4931 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-sf6b2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.310670 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" podUID="98536bf1-6e84-479c-b744-648cd081d555" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.316051 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" podStartSLOduration=124.316025198 podStartE2EDuration="2m4.316025198s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.313475336 +0000 UTC m=+145.122704210" watchObservedRunningTime="2026-01-31 04:26:26.316025198 +0000 UTC m=+145.125254072" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.317400 4931 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" podStartSLOduration=125.317391672 podStartE2EDuration="2m5.317391672s" podCreationTimestamp="2026-01-31 04:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.282671965 +0000 UTC m=+145.091900839" watchObservedRunningTime="2026-01-31 04:26:26.317391672 +0000 UTC m=+145.126620536" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.320311 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" event={"ID":"e32d743c-c801-41a1-9f35-3b641554339f","Type":"ContainerStarted","Data":"5d75d9d6d318cf4d8da1ae084cbd5c97c2446cb87c4586cfd1a33c1c4ec4df21"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.326053 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" event={"ID":"70ea039d-762a-4d09-a1a3-45d32be0e754","Type":"ContainerStarted","Data":"a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.326965 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.343496 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" podStartSLOduration=124.343477661 podStartE2EDuration="2m4.343477661s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.341046073 +0000 UTC m=+145.150274957" watchObservedRunningTime="2026-01-31 04:26:26.343477661 +0000 UTC m=+145.152706535" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.344936 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" event={"ID":"16b7b136-dad4-4347-9941-d97a23fa694c","Type":"ContainerStarted","Data":"8504b68d3d6eabafee44e4e6a3b3da36a26cf03ea7b8cb8fb6724ce75b16059d"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.344975 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" event={"ID":"16b7b136-dad4-4347-9941-d97a23fa694c","Type":"ContainerStarted","Data":"9b2b0823d8bdaf3617ca3470c1e95e573562e7f404665e4d9f1b23d0cf6966c8"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.347677 4931 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-84x2t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.347731 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" podUID="70ea039d-762a-4d09-a1a3-45d32be0e754" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.352359 4931 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" event={"ID":"0a4f1385-7a3c-4195-8b66-1df921d7187b","Type":"ContainerStarted","Data":"bc5627d518cdb9702d955f8feb8116fd61d07d5ed59dc66a2fa3bd937d829505"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.352401 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" event={"ID":"0a4f1385-7a3c-4195-8b66-1df921d7187b","Type":"ContainerStarted","Data":"f77eb3dcf051544bbf98769039569ed9e935bda4f858eacb88b4e7e60ca1545e"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.357714 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" event={"ID":"29ed5258-385e-4835-89ed-03e3c21cc7cb","Type":"ContainerStarted","Data":"435a51301c9cbc9c178fdbdd0089fad5ee806e345e4ded7c7255a1de07befbbe"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.362477 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" event={"ID":"135975db-0bfc-4fb8-b014-a3e51817c777","Type":"ContainerStarted","Data":"940c507f1cc1e222d70c95e112a269054f2964d8721063b01f49527ab536dea7"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.376799 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" podStartSLOduration=124.376782523 podStartE2EDuration="2m4.376782523s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.373918661 +0000 UTC m=+145.183147545" watchObservedRunningTime="2026-01-31 04:26:26.376782523 +0000 UTC m=+145.186011397" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.380854 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.380922 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.880907356 +0000 UTC m=+145.690136230 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.384498 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.387023 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.887008592 +0000 UTC m=+145.696237466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.398420 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" podStartSLOduration=124.398401939 podStartE2EDuration="2m4.398401939s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.397868522 +0000 UTC m=+145.207097396" watchObservedRunningTime="2026-01-31 04:26:26.398401939 +0000 UTC m=+145.207630803" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.402128 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" event={"ID":"0f58bb1e-365a-4d56-8b48-5cb0e8e12982","Type":"ContainerStarted","Data":"3af69f4a2ea8cf2695d033e8bcb5c5e6e8f82a0a25c62b682c28189d9cb0aca3"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.438487 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" event={"ID":"07927adb-887c-4259-91ca-b46c8f9809e4","Type":"ContainerStarted","Data":"64370b519a2661e88c3448095276b146718a187a730a1750bfe2e5c59de545e8"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.453807 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgjvq" podStartSLOduration=124.453783751 podStartE2EDuration="2m4.453783751s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.442144617 +0000 UTC m=+145.251373491" 
watchObservedRunningTime="2026-01-31 04:26:26.453783751 +0000 UTC m=+145.263012775" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.458057 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" event={"ID":"ce116733-a153-44bf-838d-7bd7593a3b96","Type":"ContainerStarted","Data":"d414266a0870a25b0c7bcdbd61651ddcf4c1ecf84aa5259443c8c032e757ed66"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.458102 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" event={"ID":"ce116733-a153-44bf-838d-7bd7593a3b96","Type":"ContainerStarted","Data":"dd6f5954ac69e9397ce457ed4e4081f80d6927981a11730af06cf26c21f52b27"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.490342 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nqffw" podStartSLOduration=124.490315807 podStartE2EDuration="2m4.490315807s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.465386655 +0000 UTC m=+145.274615529" watchObservedRunningTime="2026-01-31 04:26:26.490315807 +0000 UTC m=+145.299544681" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.496902 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.497176 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.997141376 +0000 UTC m=+145.806370250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.497501 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.498545 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:26.998536211 +0000 UTC m=+145.807765085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.499489 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" event={"ID":"a5789585-9fd2-4f7a-9ac7-8ff947f60c2e","Type":"ContainerStarted","Data":"f152310395215973caf4bd7e4eb0cfeadb95dc1b6ec8fd23000126334d415cf0"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.506073 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" event={"ID":"1588ab56-bcb9-4baa-bf88-8edca43a7ba5","Type":"ContainerStarted","Data":"ed74d32b0c670f0fd1a126f61a89e601046d7604452139d5d57031e317a266f9"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.522036 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7pgt9" podStartSLOduration=125.522003866 podStartE2EDuration="2m5.522003866s" podCreationTimestamp="2026-01-31 04:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.520632032 +0000 UTC m=+145.329860906" watchObservedRunningTime="2026-01-31 04:26:26.522003866 +0000 UTC m=+145.331232740" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.525880 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5g5dr" event={"ID":"50968d0b-3fb0-4208-b5df-b1a07c341f0a","Type":"ContainerStarted","Data":"13e9aa77ac90afe6acdf59a0a439907eed3bf9aaec56313c01bac2bd318633e9"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.535028 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" event={"ID":"a74f2108-1c90-40a5-853b-55503384e185","Type":"ContainerStarted","Data":"7f0437e3c92e6d34046a144f64b5a743b3ecfef7775e43d9bf6e893d8807c4ba"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.559996 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" podStartSLOduration=124.559962818 podStartE2EDuration="2m4.559962818s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.551507946 +0000 UTC m=+145.360736820" watchObservedRunningTime="2026-01-31 04:26:26.559962818 +0000 UTC m=+145.369191692" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.585866 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.585920 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.586202 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" event={"ID":"19835a65-0362-4aa9-9152-540437cd2d90","Type":"ContainerStarted","Data":"a67c5c376c816da1f7e02f1c059902b6e896f3050b4d59c61a154d44410866f8"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.586931 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.605157 4931 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qvwbq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.605206 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" podUID="19835a65-0362-4aa9-9152-540437cd2d90" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.606567 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.606597 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grxxg" podStartSLOduration=124.606582138 podStartE2EDuration="2m4.606582138s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.605546635 +0000 UTC m=+145.414775509" watchObservedRunningTime="2026-01-31 04:26:26.606582138 +0000 UTC m=+145.415811012" Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.608430 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.108405797 +0000 UTC m=+145.917634671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.642463 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" event={"ID":"1bd05f57-7c4f-4c00-96ce-e8f92338d14d","Type":"ContainerStarted","Data":"4c1e88dc337e859c7892a0ba198b7dc8a940a32e20f663d139bf9edd3f68b644"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.649365 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.663191 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" podStartSLOduration=124.663171939 podStartE2EDuration="2m4.663171939s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.662170207 +0000 UTC m=+145.471399081" watchObservedRunningTime="2026-01-31 04:26:26.663171939 +0000 UTC m=+145.472400813" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.672373 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" event={"ID":"de94b2f3-5852-49ce-81f2-daad119be292","Type":"ContainerStarted","Data":"8b2276f766e0fbf10b9662b4dc531918f8f8ba522e4738d0284cafae0d0c97fa"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.691381 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gctwh" podStartSLOduration=126.691363456 podStartE2EDuration="2m6.691363456s" podCreationTimestamp="2026-01-31 04:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.688963689 +0000 UTC m=+145.498192563" watchObservedRunningTime="2026-01-31 04:26:26.691363456 +0000 UTC m=+145.500592330" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.699580 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" event={"ID":"d7595206-8944-4009-bcd7-f9952d225277","Type":"ContainerStarted","Data":"a4011bd5eb0a3c688cdb4edaa982f67b2230093e333d48081f768b904e4254e0"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.708504 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.708796 4931 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.208785097 +0000 UTC m=+146.018013971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.718505 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5dt7" event={"ID":"fc0e3a20-b429-47e1-8100-aa1fae313bf7","Type":"ContainerStarted","Data":"290840d393e3005da7571ce5451ed4892a956e1197e28f09028d33c7215c8ce5"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.725347 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvt8b" event={"ID":"fd126257-f24f-4597-8e75-e6d1c24e8709","Type":"ContainerStarted","Data":"4cee860b722f2f03ba25b96d2e4e751889e53aaebced8b6bc788e37c29915d38"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.726402 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jvt8b" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.738263 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-49jws" podStartSLOduration=124.738248145 podStartE2EDuration="2m4.738248145s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.737756569 +0000 UTC m=+145.546985443" watchObservedRunningTime="2026-01-31 04:26:26.738248145 +0000 UTC m=+145.547477019" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.738487 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvt8b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.738523 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvt8b" podUID="fd126257-f24f-4597-8e75-e6d1c24e8709" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.738869 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" event={"ID":"10bb0556-24cb-479a-bbde-e360630fe24a","Type":"ContainerStarted","Data":"38a21b7fd78b1f1d9a9f63a3f975101dc053165f36fdcf8ee18cb5812942f3d8"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.750700 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.756538 4931 generic.go:334] "Generic (PLEG): container finished" 
podID="7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea" containerID="f33de3cf8ad5f42ad92064c569bdfe085fa51045dd91958a330bfd1149a6a184" exitCode=0 Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.756808 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" event={"ID":"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea","Type":"ContainerDied","Data":"f33de3cf8ad5f42ad92064c569bdfe085fa51045dd91958a330bfd1149a6a184"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.781572 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" event={"ID":"39de16fa-35f7-4286-8160-f29fd1059389","Type":"ContainerStarted","Data":"59d625e4882df8d2c89ec53b20f2c48c916de09a82b45f9599e8d6e6015c4003"} Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.783395 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:26 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:26 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:26 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.783424 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.805875 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjk2" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.811325 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.813768 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.313747135 +0000 UTC m=+146.122976009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.823444 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6vx6" podStartSLOduration=124.823427846 podStartE2EDuration="2m4.823427846s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.781804677 +0000 UTC m=+145.591033551" watchObservedRunningTime="2026-01-31 04:26:26.823427846 +0000 UTC m=+145.632656730" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.897405 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bmg8v" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.914866 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:26 crc kubenswrapper[4931]: E0131 04:26:26.915165 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.415152188 +0000 UTC m=+146.224381062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.916618 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-s5dt7" podStartSLOduration=124.916608445 podStartE2EDuration="2m4.916608445s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.915609543 +0000 UTC m=+145.724838417" watchObservedRunningTime="2026-01-31 04:26:26.916608445 +0000 UTC m=+145.725837309" Jan 31 04:26:26 crc kubenswrapper[4931]: I0131 04:26:26.984131 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-894fk" podStartSLOduration=124.984116817 podStartE2EDuration="2m4.984116817s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.954032319 +0000 UTC m=+145.763261193" watchObservedRunningTime="2026-01-31 04:26:26.984116817 +0000 UTC m=+145.793345691" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.016172 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.017130 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.517104349 +0000 UTC m=+146.326333223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.025791 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" podStartSLOduration=125.025768647 podStartE2EDuration="2m5.025768647s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:26.984493209 +0000 UTC m=+145.793722073" watchObservedRunningTime="2026-01-31 04:26:27.025768647 +0000 UTC m=+145.834997521" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.119681 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.123454 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.62343348 +0000 UTC m=+146.432662354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.132082 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" podStartSLOduration=126.132051408 podStartE2EDuration="2m6.132051408s" podCreationTimestamp="2026-01-31 04:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.131359905 +0000 UTC m=+145.940588809" watchObservedRunningTime="2026-01-31 04:26:27.132051408 +0000 UTC m=+145.941280312" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.132490 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" podStartSLOduration=125.132483032 podStartE2EDuration="2m5.132483032s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.087593177 +0000 UTC m=+145.896822051" watchObservedRunningTime="2026-01-31 04:26:27.132483032 +0000 UTC m=+145.941711926" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.169271 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jvt8b" podStartSLOduration=125.169253205 podStartE2EDuration="2m5.169253205s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.165818894 +0000 UTC m=+145.975047768" watchObservedRunningTime="2026-01-31 04:26:27.169253205 +0000 UTC m=+145.978482079" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.223781 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.224142 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.724125381 +0000 UTC m=+146.533354255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.254814 4931 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-bmrsg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.255138 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.329628 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.330061 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.830045349 +0000 UTC m=+146.639274223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.434364 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.435058 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:27.935042068 +0000 UTC m=+146.744270942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.536194 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.536660 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.036643947 +0000 UTC m=+146.845872821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.638186 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.638565 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.138540566 +0000 UTC m=+146.947769440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.739449 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.740235 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.240209788 +0000 UTC m=+147.049438852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.751182 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:27 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:27 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:27 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.751294 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.791429 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" event={"ID":"13d327eb-39b7-4a67-8a2f-a372ccbbd5de","Type":"ContainerStarted","Data":"0ccd6b95462e8ccfe716c3bb74f47b317ec23e9b03d5cd2602f879e6f4fcc458"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.791537 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.803306 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" event={"ID":"1588ab56-bcb9-4baa-bf88-8edca43a7ba5","Type":"ContainerStarted","Data":"dbc11d502e105d4cbb2965677f74ca7fa554e8e754761eac52f43d3ab229b2f7"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.811843 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" event={"ID":"fefdfe52-daca-429b-af44-e9f855996b8e","Type":"ContainerStarted","Data":"c456020d071eb444e1ee83c3f99a75a1571bc065f7dcef2778de266191985279"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.811907 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" event={"ID":"fefdfe52-daca-429b-af44-e9f855996b8e","Type":"ContainerStarted","Data":"63db6181901d016dbd8415e9e3b788b4a445bce94ba8cd0f7873ec3c57abdb68"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.812647 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.813147 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" podStartSLOduration=125.813132985 podStartE2EDuration="2m5.813132985s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.811308376 +0000 UTC m=+146.620537250" watchObservedRunningTime="2026-01-31 04:26:27.813132985 +0000 UTC m=+146.622361859" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.820749 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fd6j6" event={"ID":"f08b24fc-253c-44c8-a272-78f4601644b1","Type":"ContainerStarted","Data":"d5c7cd80023a24e4fcc975eff1237def414c195c3003b5c98a6687d03ecbe08e"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.831598 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" event={"ID":"0f58bb1e-365a-4d56-8b48-5cb0e8e12982","Type":"ContainerStarted","Data":"8fa0bcdd60bb0967fbfb4ab92d9142883c32023d556ba1555b84847c0f657862"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.831635 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" event={"ID":"0f58bb1e-365a-4d56-8b48-5cb0e8e12982","Type":"ContainerStarted","Data":"d32c692670115a2631d9704467d2c5f40a80b8bc544125cdb9f97cf48e004692"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.840797 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.841422 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.341384034 +0000 UTC m=+147.150612908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.842319 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.842898 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.342887012 +0000 UTC m=+147.152115886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.844662 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tvsqj" event={"ID":"10bb0556-24cb-479a-bbde-e360630fe24a","Type":"ContainerStarted","Data":"4edba5c54b90393f110cb6b16c5855d691d0d5831725742d675cbc4a770557be"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.865373 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-x6pwq" podStartSLOduration=125.865344925 podStartE2EDuration="2m5.865344925s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.836749615 +0000 UTC m=+146.645978489" watchObservedRunningTime="2026-01-31 04:26:27.865344925 +0000 UTC m=+146.674573799" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.866213 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" event={"ID":"770df778-b615-43fd-a60d-914f5691e3ac","Type":"ContainerStarted","Data":"9db6831cf9c757532b7aeb60a79abc55b9582403439ebd7113a19c13a065b9c2"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.878104 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" event={"ID":"c7be93e4-9ead-466c-90d5-a6b53cc0c1fd","Type":"ContainerStarted","Data":"5d6310e3c80fdb420b136d69bdd46a2f213a976d7266d682243ba78b13bf3d66"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.889586 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" podStartSLOduration=125.889564754 podStartE2EDuration="2m5.889564754s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.866619866 +0000 UTC m=+146.675848760" watchObservedRunningTime="2026-01-31 04:26:27.889564754 +0000 UTC m=+146.698793628" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.938849 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" event={"ID":"ce116733-a153-44bf-838d-7bd7593a3b96","Type":"ContainerStarted","Data":"f7a2fcfb711779a381c2735a0d5ba7752b29a6c34e3b7debc45c1a93ca8381d2"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.941570 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p5px" podStartSLOduration=125.941543097 podStartE2EDuration="2m5.941543097s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.938289792 +0000 UTC m=+146.747518666" watchObservedRunningTime="2026-01-31 04:26:27.941543097 +0000 UTC m=+146.750771971" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.942029 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zlqj2" podStartSLOduration=125.942024132 podStartE2EDuration="2m5.942024132s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.888618224 +0000 UTC m=+146.697847098" watchObservedRunningTime="2026-01-31 04:26:27.942024132 +0000 UTC m=+146.751253006" Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.943685 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.944262 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.444239864 +0000 UTC m=+147.253468738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.944370 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:27 crc kubenswrapper[4931]: E0131 04:26:27.948145 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.448134329 +0000 UTC m=+147.257363203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.967619 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkzm6" event={"ID":"433c2cfb-3f6c-4fe2-9289-237f398dea0b","Type":"ContainerStarted","Data":"0681b3b03fca087d04df04f9dfdbd617f88255edb33cea55e550ece5df58170d"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.983839 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" event={"ID":"135975db-0bfc-4fb8-b014-a3e51817c777","Type":"ContainerStarted","Data":"33e99b79017a21f89580f7ed8c6b0c899216030935658931bbfba5dc0fb37acc"} Jan 31 04:26:27 crc kubenswrapper[4931]: I0131 04:26:27.987048 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9gcsx" podStartSLOduration=125.987034501 podStartE2EDuration="2m5.987034501s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:27.984595312 +0000 UTC m=+146.793824196" watchObservedRunningTime="2026-01-31 04:26:27.987034501 +0000 UTC m=+146.796263375" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.002115 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" event={"ID":"e32d743c-c801-41a1-9f35-3b641554339f","Type":"ContainerStarted","Data":"78ee6b3b22299b8e4b727e4a30cd10d6b87633e005b3c7d5cbc49c161f15c377"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.002168 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" 
event={"ID":"e32d743c-c801-41a1-9f35-3b641554339f","Type":"ContainerStarted","Data":"c2f1510fce8122af2fc5b58ba1c5fac74df1aae1ddd5a53dbef46634fe23e24d"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.004199 4931 csr.go:261] certificate signing request csr-pdcn6 is approved, waiting to be issued Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.020367 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4w7p4" podStartSLOduration=126.020343723 podStartE2EDuration="2m6.020343723s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:28.015122445 +0000 UTC m=+146.824351359" watchObservedRunningTime="2026-01-31 04:26:28.020343723 +0000 UTC m=+146.829572597" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.030651 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" event={"ID":"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea","Type":"ContainerStarted","Data":"9164fa2a71fe62e0affbc1f330422764a894dfd245d176cc379016f553e62ede"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.044866 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-dm9q4" podStartSLOduration=126.044844061 podStartE2EDuration="2m6.044844061s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:28.043206898 +0000 UTC m=+146.852435772" watchObservedRunningTime="2026-01-31 04:26:28.044844061 +0000 UTC m=+146.854072925" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.048155 4931 csr.go:257] certificate signing request csr-pdcn6 is issued Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.048487 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.048866 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.54884794 +0000 UTC m=+147.358076814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.050142 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9fzp8" event={"ID":"3d4c4bbd-a5bf-414f-bbd0-9f27401aac5c","Type":"ContainerStarted","Data":"cebb304e28836eb46b189dc461aa7580370f0a56e4ba814e57d910b52abcedd3"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.066581 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvr69" event={"ID":"e81f063c-6cde-4533-ac88-72044b1c8eef","Type":"ContainerStarted","Data":"ea419d5c89df58751aeadeb9745cebed568b9a29a6d441100c682c7c1763fd49"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.072042 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" event={"ID":"b94cc359-b91d-4058-9b99-daee5cb58497","Type":"ContainerStarted","Data":"2214ded12636a10761e8ebbc7d9272c977396c766915fdceeaefef57bd936a80"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.072668 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.077953 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" event={"ID":"0a4f1385-7a3c-4195-8b66-1df921d7187b","Type":"ContainerStarted","Data":"fa50586ac9fc893692a2a168f451a66c358915a7aff8028adc1167759bdbe0e4"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.091599 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" podStartSLOduration=127.091581745 podStartE2EDuration="2m7.091581745s" podCreationTimestamp="2026-01-31 04:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:28.077569054 +0000 UTC m=+146.886797928" watchObservedRunningTime="2026-01-31 04:26:28.091581745 +0000 UTC m=+146.900810619" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.093238 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5g5dr" event={"ID":"50968d0b-3fb0-4208-b5df-b1a07c341f0a","Type":"ContainerStarted","Data":"241c4969b3ec1152ed0ff692b5550845b20c8c56f83d1b2ae6bca15eaeb4f162"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.093308 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5g5dr" event={"ID":"50968d0b-3fb0-4208-b5df-b1a07c341f0a","Type":"ContainerStarted","Data":"ddab50ef338ab3e6f7fcffaf2b5365b38d742560b04ed0c3c7c1f35c08e1d1cc"} Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.097084 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvt8b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 04:26:28 crc 
kubenswrapper[4931]: I0131 04:26:28.097139 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvt8b" podUID="fd126257-f24f-4597-8e75-e6d1c24e8709" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.097810 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.098168 4931 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-drqrf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.098232 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.109466 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.109704 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4skc" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.109942 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sf6b2" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.149926 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.172426 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" podStartSLOduration=126.172397076 podStartE2EDuration="2m6.172397076s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:28.110141392 +0000 UTC m=+146.919370266" watchObservedRunningTime="2026-01-31 04:26:28.172397076 +0000 UTC m=+146.981625950" Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.179249 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.679230216 +0000 UTC m=+147.488459090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.204647 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9fzp8" podStartSLOduration=8.204618853 podStartE2EDuration="8.204618853s" podCreationTimestamp="2026-01-31 04:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:28.17161 +0000 UTC m=+146.980838874" watchObservedRunningTime="2026-01-31 04:26:28.204618853 +0000 UTC m=+147.013847727" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.217025 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-78z2q" podStartSLOduration=126.217001701 podStartE2EDuration="2m6.217001701s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:28.214370756 +0000 UTC m=+147.023599630" watchObservedRunningTime="2026-01-31 04:26:28.217001701 +0000 UTC m=+147.026230565" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.252933 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.255021 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.755003274 +0000 UTC m=+147.564232148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.305102 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.355462 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.355795 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.855783117 +0000 UTC m=+147.665011991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.457351 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.457583 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.957550862 +0000 UTC m=+147.766779736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.457764 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.458079 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:28.958066229 +0000 UTC m=+147.767295173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.559436 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.559663 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.059633197 +0000 UTC m=+147.868862071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.560063 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.560410 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.060402282 +0000 UTC m=+147.869631156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.661770 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.661998 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.16196659 +0000 UTC m=+147.971195464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.662095 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.662228 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.662266 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.662648 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.162630271 +0000 UTC m=+147.971859145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.664738 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.666385 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5g5dr" podStartSLOduration=8.666366642 podStartE2EDuration="8.666366642s" podCreationTimestamp="2026-01-31 04:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:28.665108351 +0000 UTC m=+147.474337225" watchObservedRunningTime="2026-01-31 04:26:28.666366642 +0000 UTC m=+147.475595516" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.671734 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.759973 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:28 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:28 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:28 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.760034 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.768218 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.768314 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.268294072 +0000 UTC m=+148.077522946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.768494 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.768525 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.768550 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.768880 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.2688677 +0000 UTC m=+148.078096574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.778090 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.793976 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.825102 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.843628 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.860009 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.870951 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.873071 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.373048172 +0000 UTC m=+148.182277046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.972046 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:28 crc kubenswrapper[4931]: E0131 04:26:28.972466 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.47243473 +0000 UTC m=+148.281663604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:28 crc kubenswrapper[4931]: I0131 04:26:28.977881 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qvwbq" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.050525 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 04:21:28 +0000 UTC, rotation deadline is 2026-10-28 04:35:52.025578999 +0000 UTC Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.050974 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6480h9m22.974607752s for next certificate rotation Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.073563 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.073762 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.57373672 +0000 UTC m=+148.382965594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.073922 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.074413 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.574397121 +0000 UTC m=+148.383625995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.124129 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" event={"ID":"7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea","Type":"ContainerStarted","Data":"cb4a9c0df97b0a0ac77256cca9fa3aaf29b2e2d82e12fc5d982997e1d76a3aec"} Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.126904 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" event={"ID":"770df778-b615-43fd-a60d-914f5691e3ac","Type":"ContainerStarted","Data":"1ab239ff351fe1d0bb54f4f2ec799b4346f3096e22e8602846f243c1a7843ec5"} Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.131005 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvt8b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.131049 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvt8b" podUID="fd126257-f24f-4597-8e75-e6d1c24e8709" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.140971 4931 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-drqrf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.141021 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.175075 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.175453 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.675437513 +0000 UTC m=+148.484666387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.183045 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbw2z" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.279618 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.291355 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.791330242 +0000 UTC m=+148.600559116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.387660 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.387929 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.887890749 +0000 UTC m=+148.697119623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.388434 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.388796 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.888779628 +0000 UTC m=+148.698008502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.430653 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2h5z9"] Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.431637 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.436493 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2h5z9"] Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.445974 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.489639 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.490016 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:29.990000925 +0000 UTC m=+148.799229799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.559414 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfs9t"] Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.560440 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.565132 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.593547 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.593634 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhjk\" (UniqueName: \"kubernetes.io/projected/6fd4e055-c0fd-4afb-873d-9920d5765466-kube-api-access-tmhjk\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.593687 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-utilities\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.593759 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-catalog-content\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.594099 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.094087305 +0000 UTC m=+148.903316179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.614886 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfs9t"] Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.695400 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.695590 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-242mv\" (UniqueName: \"kubernetes.io/projected/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-kube-api-access-242mv\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.695626 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmhjk\" (UniqueName: \"kubernetes.io/projected/6fd4e055-c0fd-4afb-873d-9920d5765466-kube-api-access-tmhjk\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.695647 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-utilities\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.695679 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-utilities\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.695706 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-catalog-content\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.695749 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-catalog-content\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.695863 4931 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.195848089 +0000 UTC m=+149.005076963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.696436 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-utilities\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.696546 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-catalog-content\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.723371 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nnmk2"] Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.729529 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.735043 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmhjk\" (UniqueName: \"kubernetes.io/projected/6fd4e055-c0fd-4afb-873d-9920d5765466-kube-api-access-tmhjk\") pod \"community-operators-2h5z9\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.755087 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nnmk2"] Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.765935 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:29 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:29 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:29 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.766156 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.796742 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-242mv\" (UniqueName: \"kubernetes.io/projected/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-kube-api-access-242mv\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.797186 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2zf2\" (UniqueName: \"kubernetes.io/projected/909dcbcc-c48a-4180-8b14-524e5839eaef-kube-api-access-x2zf2\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.797217 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-utilities\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.797263 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-catalog-content\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.797293 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-utilities\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc 
kubenswrapper[4931]: I0131 04:26:29.797339 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-catalog-content\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.797376 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.797705 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.297690597 +0000 UTC m=+149.106919471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.798914 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-utilities\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.799295 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-catalog-content\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.818059 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.837367 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-242mv\" (UniqueName: \"kubernetes.io/projected/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-kube-api-access-242mv\") pod \"certified-operators-sfs9t\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.900235 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.900436 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2zf2\" (UniqueName: \"kubernetes.io/projected/909dcbcc-c48a-4180-8b14-524e5839eaef-kube-api-access-x2zf2\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.900459 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-utilities\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.900491 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-catalog-content\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.900930 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-catalog-content\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: E0131 04:26:29.900999 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.400984531 +0000 UTC m=+149.210213405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.901653 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-utilities\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.949792 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62qpd"] Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.950668 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.967576 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2zf2\" (UniqueName: \"kubernetes.io/projected/909dcbcc-c48a-4180-8b14-524e5839eaef-kube-api-access-x2zf2\") pod \"community-operators-nnmk2\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.974500 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:26:29 crc kubenswrapper[4931]: I0131 04:26:29.991897 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62qpd"] Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.002511 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.004452 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.50443726 +0000 UTC m=+149.313666134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.107892 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.108041 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtj6\" (UniqueName: \"kubernetes.io/projected/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-kube-api-access-cmtj6\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.108114 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-utilities\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.108165 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-catalog-content\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.108284 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.608259551 +0000 UTC m=+149.417488425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.121152 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:26:30 crc kubenswrapper[4931]: W0131 04:26:30.197100 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-4c8a54c1fda0adc88c2915bdb0e4c5f3548039b58a78502dd0f4d7f28da94f95 WatchSource:0}: Error finding container 4c8a54c1fda0adc88c2915bdb0e4c5f3548039b58a78502dd0f4d7f28da94f95: Status 404 returned error can't find the container with id 4c8a54c1fda0adc88c2915bdb0e4c5f3548039b58a78502dd0f4d7f28da94f95 Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.214416 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.214457 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-utilities\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.214510 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-catalog-content\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.214527 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmtj6\" (UniqueName: \"kubernetes.io/projected/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-kube-api-access-cmtj6\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.215011 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.715001056 +0000 UTC m=+149.524229930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.215548 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-catalog-content\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.215615 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-utilities\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.268409 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" event={"ID":"770df778-b615-43fd-a60d-914f5691e3ac","Type":"ContainerStarted","Data":"99a464effd6c685b37c53f9aebb7610372496dcee445a3eceaba9ea88a32386f"} Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.269907 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmtj6\" (UniqueName: \"kubernetes.io/projected/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-kube-api-access-cmtj6\") pod \"certified-operators-62qpd\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.291844 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.316957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.317591 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.817565596 +0000 UTC m=+149.626794470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.419493 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.429540 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:30.929518619 +0000 UTC m=+149.738747493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.514337 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.517125 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.521980 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.524118 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.524804 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.525690 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:31.025668883 +0000 UTC m=+149.834897747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.562522 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.628407 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6afef2d1-159a-42d8-a30b-93affe8e2e00-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.628459 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6afef2d1-159a-42d8-a30b-93affe8e2e00-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.628514 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.628826 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:31.128814742 +0000 UTC m=+149.938043616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.639224 4931 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.654495 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2h5z9"] Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.731589 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.732245 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6afef2d1-159a-42d8-a30b-93affe8e2e00-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.732296 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6afef2d1-159a-42d8-a30b-93affe8e2e00-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.732420 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:31.232399915 +0000 UTC m=+150.041628789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.732547 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6afef2d1-159a-42d8-a30b-93affe8e2e00-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.788571 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6afef2d1-159a-42d8-a30b-93affe8e2e00-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.797599 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:30 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:30 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:30 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.798476 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.800523 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfs9t"] Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.833430 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.833708 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:31.333698405 +0000 UTC m=+150.142927279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.859619 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nnmk2"] Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.896008 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:30 crc kubenswrapper[4931]: I0131 04:26:30.940246 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:30 crc kubenswrapper[4931]: E0131 04:26:30.940664 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:26:31.440644877 +0000 UTC m=+150.249873751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.041549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:31 crc kubenswrapper[4931]: E0131 04:26:31.041918 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:26:31.541906465 +0000 UTC m=+150.351135329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w528s" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.113183 4931 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T04:26:30.639242428Z","Handler":null,"Name":""} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.126440 4931 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.126492 4931 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.142179 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.186787 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62qpd"] Jan 31 04:26:31 crc kubenswrapper[4931]: W0131 04:26:31.209977 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73bfe314_a35b_4a93_b1cb_f1f7a4b2756c.slice/crio-3aba2a90ab86b8411d601ee13e005e87ad037dd4258a18147ea332500c544763 WatchSource:0}: Error finding container 3aba2a90ab86b8411d601ee13e005e87ad037dd4258a18147ea332500c544763: Status 404 returned error can't find the container with id 3aba2a90ab86b8411d601ee13e005e87ad037dd4258a18147ea332500c544763 Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.239487 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.244601 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.258031 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.258065 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.322481 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h5z9" event={"ID":"6fd4e055-c0fd-4afb-873d-9920d5765466","Type":"ContainerStarted","Data":"d6ae0a71c09f272c62d8b893571af2c4bdfe37878fa5b6ab0016cc2ae940b980"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.323022 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h5z9" event={"ID":"6fd4e055-c0fd-4afb-873d-9920d5765466","Type":"ContainerStarted","Data":"6e1165c58b13bf8c2ae9ae9ac1b2c9a2f4e5b519723ee1ef4a67492b4112ec6d"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.325463 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w528s\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.357952 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62qpd" event={"ID":"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c","Type":"ContainerStarted","Data":"3aba2a90ab86b8411d601ee13e005e87ad037dd4258a18147ea332500c544763"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.378056 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmk2" event={"ID":"909dcbcc-c48a-4180-8b14-524e5839eaef","Type":"ContainerStarted","Data":"29ccaccf88b8a358bcf9d2f17553ad3a9bd2c9c1fafca9148bf0cb897810c5b6"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.384966 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.428034 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f2928dd067d26f64a9ca755667f6bf3aec309d7190f70bfb00dc68d9cfb8013f"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.428098 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"95f92b710614ca229ac117c2268511b1ca6d25f93891ada2c24b94e5a6612d6a"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.466854 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfs9t" event={"ID":"471d6f3b-ab9a-414a-b14f-d719d2d5e96a","Type":"ContainerStarted","Data":"a0969411cfd7c069adcce155c547546b270abbedd318d1ed26158d9ec11efeff"} Jan 31 
04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.466902 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfs9t" event={"ID":"471d6f3b-ab9a-414a-b14f-d719d2d5e96a","Type":"ContainerStarted","Data":"726179c5148e24fe50044539198d323c71ceeac8f289d9232013fe9684098761"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.499470 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bee95997b1716da5e211edc424e13e56615f0d5ac36e8f8cb471facc80661865"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.499517 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4c8a54c1fda0adc88c2915bdb0e4c5f3548039b58a78502dd0f4d7f28da94f95"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.500123 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.508582 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.509070 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.523914 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"58cfd4cad41fedf925e59fdaa13e61fe2c7b5727f081d03f03bdfff0e95d234b"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.523950 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a51742df596b2d770e33ec963fa37cd091aa91f3219710dd9d21c73161ab9b7a"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.531368 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jmxsb"] Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.534961 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.542125 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.627071 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmxsb"] Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.641180 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" event={"ID":"770df778-b615-43fd-a60d-914f5691e3ac","Type":"ContainerStarted","Data":"7eaaf1862ee9e41f291affa3225fbc83d207c93eac35142d02be0e7f54f18572"} Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.673055 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-catalog-content\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.673111 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6sh\" (UniqueName: \"kubernetes.io/projected/2bff2dda-54db-4a36-a7a5-af34cf3367dc-kube-api-access-6p6sh\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.673235 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-utilities\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.706213 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-q7slc" podStartSLOduration=11.706185852 podStartE2EDuration="11.706185852s" podCreationTimestamp="2026-01-31 04:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:31.70334786 +0000 UTC m=+150.512576724" watchObservedRunningTime="2026-01-31 04:26:31.706185852 +0000 UTC m=+150.515414726" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.755295 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:31 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:31 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:31 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.755355 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.774467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-utilities\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.774553 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-catalog-content\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.774572 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6sh\" (UniqueName: \"kubernetes.io/projected/2bff2dda-54db-4a36-a7a5-af34cf3367dc-kube-api-access-6p6sh\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.775932 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-utilities\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.776193 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-catalog-content\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.819169 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6sh\" (UniqueName: \"kubernetes.io/projected/2bff2dda-54db-4a36-a7a5-af34cf3367dc-kube-api-access-6p6sh\") pod \"redhat-marketplace-jmxsb\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.910658 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.924103 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.924275 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxzrv"] Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.927965 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:31 crc kubenswrapper[4931]: I0131 04:26:31.946186 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxzrv"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.088554 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-utilities\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.088627 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-catalog-content\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.088698 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmjj\" (UniqueName: \"kubernetes.io/projected/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-kube-api-access-7hmjj\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.132399 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w528s"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.190887 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmjj\" (UniqueName: \"kubernetes.io/projected/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-kube-api-access-7hmjj\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.190957 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-utilities\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.190992 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-catalog-content\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.191387 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-catalog-content\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.191601 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-utilities\") pod \"redhat-marketplace-sxzrv\" (UID: 
\"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.251826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmjj\" (UniqueName: \"kubernetes.io/projected/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-kube-api-access-7hmjj\") pod \"redhat-marketplace-sxzrv\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.254494 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.344144 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmxsb"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.510563 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wpx9n"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.512247 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.516007 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.529201 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wpx9n"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.598085 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxzrv"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.601496 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4mrx\" (UniqueName: \"kubernetes.io/projected/a2e61b47-71de-4eff-a485-7ace762bad74-kube-api-access-h4mrx\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.601677 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-catalog-content\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.601713 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-utilities\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.662081 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxzrv" event={"ID":"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b","Type":"ContainerStarted","Data":"59c5d537c4fb0735a142fe3b14baa0b5bf9b26e2282e1ca93d40b5caa6dee21f"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.702583 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4mrx\" (UniqueName: 
\"kubernetes.io/projected/a2e61b47-71de-4eff-a485-7ace762bad74-kube-api-access-h4mrx\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.702677 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-catalog-content\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.702716 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-utilities\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.703883 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-utilities\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.703927 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-catalog-content\") pod \"redhat-operators-wpx9n\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.707627 4931 generic.go:334] "Generic (PLEG): container finished" podID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerID="53f14c9fe0fce15c8a4024c1b24f60d7b8512056856c96a7048e3a51774325f7" exitCode=0 Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.707748 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62qpd" event={"ID":"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c","Type":"ContainerDied","Data":"53f14c9fe0fce15c8a4024c1b24f60d7b8512056856c96a7048e3a51774325f7"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.722873 4931 generic.go:334] "Generic (PLEG): container finished" podID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerID="0a1a9ed1ffeb3136c8fa43e85127a5d37922db12fd616281353f3fd7a1849f98" exitCode=0 Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.723043 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmxsb" event={"ID":"2bff2dda-54db-4a36-a7a5-af34cf3367dc","Type":"ContainerDied","Data":"0a1a9ed1ffeb3136c8fa43e85127a5d37922db12fd616281353f3fd7a1849f98"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.723086 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmxsb" event={"ID":"2bff2dda-54db-4a36-a7a5-af34cf3367dc","Type":"ContainerStarted","Data":"aa0542ad661fdb289de312b857bc92d01f2a34ce56eec9f0d177e436803ced8c"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.729118 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4mrx\" (UniqueName: \"kubernetes.io/projected/a2e61b47-71de-4eff-a485-7ace762bad74-kube-api-access-h4mrx\") pod \"redhat-operators-wpx9n\" (UID: 
\"a2e61b47-71de-4eff-a485-7ace762bad74\") " pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.740172 4931 generic.go:334] "Generic (PLEG): container finished" podID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerID="d6ae0a71c09f272c62d8b893571af2c4bdfe37878fa5b6ab0016cc2ae940b980" exitCode=0 Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.740243 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h5z9" event={"ID":"6fd4e055-c0fd-4afb-873d-9920d5765466","Type":"ContainerDied","Data":"d6ae0a71c09f272c62d8b893571af2c4bdfe37878fa5b6ab0016cc2ae940b980"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.754930 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:32 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:32 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:32 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.755020 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.758248 4931 generic.go:334] "Generic (PLEG): container finished" podID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerID="d0d56d7005b79638fbef8144e9c58fc4db8b45cfeda8bad506eee895d03780ce" exitCode=0 Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.758372 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmk2" event={"ID":"909dcbcc-c48a-4180-8b14-524e5839eaef","Type":"ContainerDied","Data":"d0d56d7005b79638fbef8144e9c58fc4db8b45cfeda8bad506eee895d03780ce"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.761788 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" event={"ID":"928c64fe-ab31-4942-9bdd-64ab8b8339aa","Type":"ContainerStarted","Data":"3c3ec3e567bbdff59dacfdb58ac9ca1cb2319c97dab9b819f72417a98c7ddde2"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.761843 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" event={"ID":"928c64fe-ab31-4942-9bdd-64ab8b8339aa","Type":"ContainerStarted","Data":"08ca779c76d87b2ff92b6989ac213e472a51989a3e6133f4f950e3835843345d"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.762197 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.788347 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6afef2d1-159a-42d8-a30b-93affe8e2e00","Type":"ContainerStarted","Data":"c41a03c4934f28d6219005714ec5c47020cdcf55dd7b9400073818962bb92647"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.788769 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"6afef2d1-159a-42d8-a30b-93affe8e2e00","Type":"ContainerStarted","Data":"dfdfc803e0fd6d3f4beea5611332fd009e957e3f92df697101af6ac0fc410ad0"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.790682 4931 generic.go:334] "Generic (PLEG): container finished" podID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerID="a0969411cfd7c069adcce155c547546b270abbedd318d1ed26158d9ec11efeff" exitCode=0 Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.790756 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfs9t" event={"ID":"471d6f3b-ab9a-414a-b14f-d719d2d5e96a","Type":"ContainerDied","Data":"a0969411cfd7c069adcce155c547546b270abbedd318d1ed26158d9ec11efeff"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.797671 4931 generic.go:334] "Generic (PLEG): container finished" podID="d7595206-8944-4009-bcd7-f9952d225277" containerID="a4011bd5eb0a3c688cdb4edaa982f67b2230093e333d48081f768b904e4254e0" exitCode=0 Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.797853 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" event={"ID":"d7595206-8944-4009-bcd7-f9952d225277","Type":"ContainerDied","Data":"a4011bd5eb0a3c688cdb4edaa982f67b2230093e333d48081f768b904e4254e0"} Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.817130 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" podStartSLOduration=130.817087999 podStartE2EDuration="2m10.817087999s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:32.816544622 +0000 UTC m=+151.625773496" watchObservedRunningTime="2026-01-31 04:26:32.817087999 +0000 UTC m=+151.626316873" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.841786 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.881057 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.881037757 podStartE2EDuration="2.881037757s" podCreationTimestamp="2026-01-31 04:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:32.876409548 +0000 UTC m=+151.685638422" watchObservedRunningTime="2026-01-31 04:26:32.881037757 +0000 UTC m=+151.690266631" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.918137 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brnp5"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.919518 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.931605 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brnp5"] Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.957922 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.957956 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.958795 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:32 crc kubenswrapper[4931]: I0131 04:26:32.958816 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.006455 4931 patch_prober.go:28] interesting pod/console-f9d7485db-8hh7p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.006562 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8hh7p" podUID="d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.010958 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp96j\" (UniqueName: \"kubernetes.io/projected/c3f1936c-4896-4468-b5c3-958691b633b7-kube-api-access-jp96j\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.011063 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-utilities\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.011118 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-catalog-content\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.012540 4931 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bsjw7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]log ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]etcd ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 04:26:33 crc kubenswrapper[4931]: 
[+]poststarthook/max-in-flight-filter ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 04:26:33 crc kubenswrapper[4931]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 04:26:33 crc kubenswrapper[4931]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/openshift.io-startinformers ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 04:26:33 crc kubenswrapper[4931]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 04:26:33 crc kubenswrapper[4931]: livez check failed Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.012608 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" podUID="7c3ae0ab-72c7-4cad-a5af-7d0fd817d8ea" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:33 crc kubenswrapper[4931]: E0131 04:26:33.061078 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac09cac9_6450_4c5c_8b61_5a2c5b60e26b.slice/crio-be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.112031 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp96j\" (UniqueName: \"kubernetes.io/projected/c3f1936c-4896-4468-b5c3-958691b633b7-kube-api-access-jp96j\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.112117 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-utilities\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.112139 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-catalog-content\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.113606 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-utilities\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.114095 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-catalog-content\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.153383 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp96j\" (UniqueName: \"kubernetes.io/projected/c3f1936c-4896-4468-b5c3-958691b633b7-kube-api-access-jp96j\") pod \"redhat-operators-brnp5\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.177402 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wpx9n"] Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.291630 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.422275 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvt8b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.422782 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvt8b" podUID="fd126257-f24f-4597-8e75-e6d1c24e8709" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.422333 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvt8b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.423573 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvt8b" podUID="fd126257-f24f-4597-8e75-e6d1c24e8709" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.566340 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brnp5"] Jan 31 04:26:33 crc kubenswrapper[4931]: W0131 04:26:33.605514 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3f1936c_4896_4468_b5c3_958691b633b7.slice/crio-f8735025ac4188aab7a41004864c014eb1afe8791c4ad73c31bcc4b3fe9bdfc5 WatchSource:0}: Error finding container f8735025ac4188aab7a41004864c014eb1afe8791c4ad73c31bcc4b3fe9bdfc5: Status 404 returned error can't find the container with id f8735025ac4188aab7a41004864c014eb1afe8791c4ad73c31bcc4b3fe9bdfc5 Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.746981 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.751291 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Jan 31 04:26:33 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:33 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:33 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.751376 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.808994 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brnp5" event={"ID":"c3f1936c-4896-4468-b5c3-958691b633b7","Type":"ContainerStarted","Data":"f8735025ac4188aab7a41004864c014eb1afe8791c4ad73c31bcc4b3fe9bdfc5"} Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.842986 4931 generic.go:334] "Generic (PLEG): container finished" podID="a2e61b47-71de-4eff-a485-7ace762bad74" containerID="96cfe31accacb49bb2bef9df2f41ffa06c20b4944d90d3aa185cf3da94cfea3d" exitCode=0 Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.844168 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpx9n" event={"ID":"a2e61b47-71de-4eff-a485-7ace762bad74","Type":"ContainerDied","Data":"96cfe31accacb49bb2bef9df2f41ffa06c20b4944d90d3aa185cf3da94cfea3d"} Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.844205 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpx9n" event={"ID":"a2e61b47-71de-4eff-a485-7ace762bad74","Type":"ContainerStarted","Data":"e2eca579d1a1f74da3abe7257997080e2d2b342e7b0910eca90e5e5fd8902f00"} Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.893332 4931 generic.go:334] "Generic (PLEG): container finished" podID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerID="be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb" exitCode=0 Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.893658 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxzrv" event={"ID":"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b","Type":"ContainerDied","Data":"be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb"} Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.894883 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.921984 4931 generic.go:334] "Generic (PLEG): container finished" podID="6afef2d1-159a-42d8-a30b-93affe8e2e00" containerID="c41a03c4934f28d6219005714ec5c47020cdcf55dd7b9400073818962bb92647" exitCode=0 Jan 31 04:26:33 crc kubenswrapper[4931]: I0131 04:26:33.931938 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6afef2d1-159a-42d8-a30b-93affe8e2e00","Type":"ContainerDied","Data":"c41a03c4934f28d6219005714ec5c47020cdcf55dd7b9400073818962bb92647"} Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.136236 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.137796 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.145050 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.145299 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.181569 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.238144 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7362cc-bce8-4853-8c78-d01daee0a412-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.238226 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7362cc-bce8-4853-8c78-d01daee0a412-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.314340 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.344787 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7362cc-bce8-4853-8c78-d01daee0a412-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.344885 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7362cc-bce8-4853-8c78-d01daee0a412-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.345372 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7362cc-bce8-4853-8c78-d01daee0a412-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.387605 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7362cc-bce8-4853-8c78-d01daee0a412-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.449147 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7595206-8944-4009-bcd7-f9952d225277-config-volume\") pod \"d7595206-8944-4009-bcd7-f9952d225277\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " Jan 31 04:26:34 
crc kubenswrapper[4931]: I0131 04:26:34.449244 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7595206-8944-4009-bcd7-f9952d225277-secret-volume\") pod \"d7595206-8944-4009-bcd7-f9952d225277\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.449272 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cml4\" (UniqueName: \"kubernetes.io/projected/d7595206-8944-4009-bcd7-f9952d225277-kube-api-access-8cml4\") pod \"d7595206-8944-4009-bcd7-f9952d225277\" (UID: \"d7595206-8944-4009-bcd7-f9952d225277\") " Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.450078 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7595206-8944-4009-bcd7-f9952d225277-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7595206-8944-4009-bcd7-f9952d225277" (UID: "d7595206-8944-4009-bcd7-f9952d225277"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.462593 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7595206-8944-4009-bcd7-f9952d225277-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7595206-8944-4009-bcd7-f9952d225277" (UID: "d7595206-8944-4009-bcd7-f9952d225277"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.462965 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.463251 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7595206-8944-4009-bcd7-f9952d225277-kube-api-access-8cml4" (OuterVolumeSpecName: "kube-api-access-8cml4") pod "d7595206-8944-4009-bcd7-f9952d225277" (UID: "d7595206-8944-4009-bcd7-f9952d225277"). InnerVolumeSpecName "kube-api-access-8cml4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.551164 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7595206-8944-4009-bcd7-f9952d225277-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.551251 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7595206-8944-4009-bcd7-f9952d225277-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.551288 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cml4\" (UniqueName: \"kubernetes.io/projected/d7595206-8944-4009-bcd7-f9952d225277-kube-api-access-8cml4\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.750561 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:34 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:34 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:34 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.751117 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.947470 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.963451 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3f1936c-4896-4468-b5c3-958691b633b7" containerID="99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8" exitCode=0 Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.963530 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brnp5" event={"ID":"c3f1936c-4896-4468-b5c3-958691b633b7","Type":"ContainerDied","Data":"99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8"} Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.972341 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.974861 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm" event={"ID":"d7595206-8944-4009-bcd7-f9952d225277","Type":"ContainerDied","Data":"57be4eabfa3f847a70ca8145f450baae6b7336490e14fab061ca5c014b236b8e"} Jan 31 04:26:34 crc kubenswrapper[4931]: I0131 04:26:34.974943 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57be4eabfa3f847a70ca8145f450baae6b7336490e14fab061ca5c014b236b8e" Jan 31 04:26:34 crc kubenswrapper[4931]: W0131 04:26:34.998627 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c7362cc_bce8_4853_8c78_d01daee0a412.slice/crio-e3212f6097f27af305508217dbb1fd1bdea61bc10535ae8271cfa9f23f39a1ab WatchSource:0}: Error finding container e3212f6097f27af305508217dbb1fd1bdea61bc10535ae8271cfa9f23f39a1ab: Status 404 returned error can't find the container with id e3212f6097f27af305508217dbb1fd1bdea61bc10535ae8271cfa9f23f39a1ab Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.528340 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.676764 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6afef2d1-159a-42d8-a30b-93affe8e2e00-kubelet-dir\") pod \"6afef2d1-159a-42d8-a30b-93affe8e2e00\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.676869 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6afef2d1-159a-42d8-a30b-93affe8e2e00-kube-api-access\") pod \"6afef2d1-159a-42d8-a30b-93affe8e2e00\" (UID: \"6afef2d1-159a-42d8-a30b-93affe8e2e00\") " Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.676920 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6afef2d1-159a-42d8-a30b-93affe8e2e00-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6afef2d1-159a-42d8-a30b-93affe8e2e00" (UID: "6afef2d1-159a-42d8-a30b-93affe8e2e00"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.677531 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6afef2d1-159a-42d8-a30b-93affe8e2e00-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.683555 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6afef2d1-159a-42d8-a30b-93affe8e2e00-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6afef2d1-159a-42d8-a30b-93affe8e2e00" (UID: "6afef2d1-159a-42d8-a30b-93affe8e2e00"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.749768 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:35 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:35 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:35 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.749862 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.779476 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6afef2d1-159a-42d8-a30b-93affe8e2e00-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.995841 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c7362cc-bce8-4853-8c78-d01daee0a412","Type":"ContainerStarted","Data":"8ab4832aa82dc0c14d346fbbb0b511da0f609fa1904faaa3cd299af8258fad5f"} Jan 31 04:26:35 crc kubenswrapper[4931]: I0131 04:26:35.996215 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c7362cc-bce8-4853-8c78-d01daee0a412","Type":"ContainerStarted","Data":"e3212f6097f27af305508217dbb1fd1bdea61bc10535ae8271cfa9f23f39a1ab"} Jan 31 04:26:36 crc kubenswrapper[4931]: I0131 04:26:36.000227 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6afef2d1-159a-42d8-a30b-93affe8e2e00","Type":"ContainerDied","Data":"dfdfc803e0fd6d3f4beea5611332fd009e957e3f92df697101af6ac0fc410ad0"} Jan 31 04:26:36 crc kubenswrapper[4931]: I0131 04:26:36.000262 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfdfc803e0fd6d3f4beea5611332fd009e957e3f92df697101af6ac0fc410ad0" Jan 31 04:26:36 crc kubenswrapper[4931]: I0131 04:26:36.000331 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:26:36 crc kubenswrapper[4931]: I0131 04:26:36.021153 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.021135704 podStartE2EDuration="2.021135704s" podCreationTimestamp="2026-01-31 04:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:26:36.010792752 +0000 UTC m=+154.820021626" watchObservedRunningTime="2026-01-31 04:26:36.021135704 +0000 UTC m=+154.830364578" Jan 31 04:26:36 crc kubenswrapper[4931]: I0131 04:26:36.748284 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:36 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:36 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:36 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:36 crc kubenswrapper[4931]: I0131 04:26:36.748356 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:37 crc kubenswrapper[4931]: I0131 04:26:37.030276 4931 generic.go:334] "Generic (PLEG): container finished" podID="1c7362cc-bce8-4853-8c78-d01daee0a412" containerID="8ab4832aa82dc0c14d346fbbb0b511da0f609fa1904faaa3cd299af8258fad5f" exitCode=0 Jan 31 04:26:37 crc kubenswrapper[4931]: I0131 04:26:37.030327 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c7362cc-bce8-4853-8c78-d01daee0a412","Type":"ContainerDied","Data":"8ab4832aa82dc0c14d346fbbb0b511da0f609fa1904faaa3cd299af8258fad5f"} Jan 31 04:26:37 crc kubenswrapper[4931]: I0131 04:26:37.749015 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:37 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:37 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:37 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:37 crc kubenswrapper[4931]: I0131 04:26:37.749105 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:37 crc kubenswrapper[4931]: I0131 04:26:37.959047 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:37 crc kubenswrapper[4931]: I0131 04:26:37.964280 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bsjw7" Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.576628 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.625248 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7362cc-bce8-4853-8c78-d01daee0a412-kube-api-access\") pod \"1c7362cc-bce8-4853-8c78-d01daee0a412\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.625354 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7362cc-bce8-4853-8c78-d01daee0a412-kubelet-dir\") pod \"1c7362cc-bce8-4853-8c78-d01daee0a412\" (UID: \"1c7362cc-bce8-4853-8c78-d01daee0a412\") " Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.625913 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c7362cc-bce8-4853-8c78-d01daee0a412-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c7362cc-bce8-4853-8c78-d01daee0a412" (UID: "1c7362cc-bce8-4853-8c78-d01daee0a412"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.634736 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7362cc-bce8-4853-8c78-d01daee0a412-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c7362cc-bce8-4853-8c78-d01daee0a412" (UID: "1c7362cc-bce8-4853-8c78-d01daee0a412"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.727448 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7362cc-bce8-4853-8c78-d01daee0a412-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.727479 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7362cc-bce8-4853-8c78-d01daee0a412-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.755313 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:38 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:38 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:38 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.755368 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:38 crc kubenswrapper[4931]: I0131 04:26:38.910684 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5g5dr" Jan 31 04:26:39 crc kubenswrapper[4931]: I0131 04:26:39.100305 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c7362cc-bce8-4853-8c78-d01daee0a412","Type":"ContainerDied","Data":"e3212f6097f27af305508217dbb1fd1bdea61bc10535ae8271cfa9f23f39a1ab"} Jan 31 04:26:39 crc 
kubenswrapper[4931]: I0131 04:26:39.100346 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3212f6097f27af305508217dbb1fd1bdea61bc10535ae8271cfa9f23f39a1ab" Jan 31 04:26:39 crc kubenswrapper[4931]: I0131 04:26:39.100404 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:26:39 crc kubenswrapper[4931]: I0131 04:26:39.751150 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:39 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:39 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:39 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:39 crc kubenswrapper[4931]: I0131 04:26:39.751243 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:40 crc kubenswrapper[4931]: I0131 04:26:40.747994 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:40 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:40 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:40 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:40 crc kubenswrapper[4931]: I0131 04:26:40.748054 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:41 crc kubenswrapper[4931]: I0131 04:26:41.748239 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:41 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:41 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:41 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:41 crc kubenswrapper[4931]: I0131 04:26:41.748302 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:42 crc kubenswrapper[4931]: I0131 04:26:42.749059 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:42 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:42 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:42 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:42 crc kubenswrapper[4931]: I0131 04:26:42.749375 4931 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:42 crc kubenswrapper[4931]: I0131 04:26:42.956614 4931 patch_prober.go:28] interesting pod/console-f9d7485db-8hh7p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 04:26:42 crc kubenswrapper[4931]: I0131 04:26:42.956677 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8hh7p" podUID="d8f9e4a7-6d0d-41c9-bc46-f93fdefb2048" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 04:26:43 crc kubenswrapper[4931]: I0131 04:26:43.428194 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jvt8b" Jan 31 04:26:43 crc kubenswrapper[4931]: I0131 04:26:43.749147 4931 patch_prober.go:28] interesting pod/router-default-5444994796-s5dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:26:43 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 31 04:26:43 crc kubenswrapper[4931]: [+]process-running ok Jan 31 04:26:43 crc kubenswrapper[4931]: healthz check failed Jan 31 04:26:43 crc kubenswrapper[4931]: I0131 04:26:43.749220 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5dt7" podUID="fc0e3a20-b429-47e1-8100-aa1fae313bf7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:26:44 crc kubenswrapper[4931]: I0131 04:26:44.017457 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:44 crc kubenswrapper[4931]: I0131 04:26:44.022955 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342-metrics-certs\") pod \"network-metrics-daemon-4cc6z\" (UID: \"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342\") " pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:44 crc kubenswrapper[4931]: I0131 04:26:44.216511 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4cc6z" Jan 31 04:26:44 crc kubenswrapper[4931]: I0131 04:26:44.757699 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:44 crc kubenswrapper[4931]: I0131 04:26:44.762404 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-s5dt7" Jan 31 04:26:46 crc kubenswrapper[4931]: I0131 04:26:46.167412 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4cc6z"] Jan 31 04:26:51 crc kubenswrapper[4931]: I0131 04:26:51.133487 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:26:51 crc kubenswrapper[4931]: I0131 04:26:51.134037 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:26:51 crc kubenswrapper[4931]: I0131 04:26:51.175140 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" event={"ID":"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342","Type":"ContainerStarted","Data":"93f6442590422306451a6c6f9c70d57bd8519c26033ffb9065c35394d9860f6c"} Jan 31 04:26:51 crc kubenswrapper[4931]: I0131 04:26:51.512910 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:26:52 crc kubenswrapper[4931]: I0131 04:26:52.972999 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:26:52 crc kubenswrapper[4931]: I0131 04:26:52.976999 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8hh7p" Jan 31 04:27:03 crc kubenswrapper[4931]: I0131 04:27:03.558336 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh9j9" Jan 31 04:27:08 crc kubenswrapper[4931]: I0131 04:27:08.867463 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.130567 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 04:27:10 crc kubenswrapper[4931]: E0131 04:27:10.130813 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7595206-8944-4009-bcd7-f9952d225277" containerName="collect-profiles" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.130825 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7595206-8944-4009-bcd7-f9952d225277" containerName="collect-profiles" Jan 31 04:27:10 crc kubenswrapper[4931]: E0131 04:27:10.130842 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afef2d1-159a-42d8-a30b-93affe8e2e00" containerName="pruner" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.130848 4931 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="6afef2d1-159a-42d8-a30b-93affe8e2e00" containerName="pruner" Jan 31 04:27:10 crc kubenswrapper[4931]: E0131 04:27:10.130861 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7362cc-bce8-4853-8c78-d01daee0a412" containerName="pruner" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.130867 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7362cc-bce8-4853-8c78-d01daee0a412" containerName="pruner" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.130958 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afef2d1-159a-42d8-a30b-93affe8e2e00" containerName="pruner" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.130966 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7595206-8944-4009-bcd7-f9952d225277" containerName="collect-profiles" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.130980 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7362cc-bce8-4853-8c78-d01daee0a412" containerName="pruner" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.131334 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.136482 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.136613 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.143511 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.286863 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.286929 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.388077 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.388163 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.388588 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.407765 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:10 crc kubenswrapper[4931]: I0131 04:27:10.454646 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:11 crc kubenswrapper[4931]: E0131 04:27:11.344969 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 04:27:11 crc kubenswrapper[4931]: E0131 04:27:11.345136 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4mrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wpx9n_openshift-marketplace(a2e61b47-71de-4eff-a485-7ace762bad74): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:27:11 crc kubenswrapper[4931]: E0131 04:27:11.346316 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wpx9n" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.533872 4931 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.538316 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.544232 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.570617 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.570911 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-var-lock\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.571071 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8699b4d0-215a-4648-979d-693eaff910d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.671778 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.671829 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-var-lock\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.671865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8699b4d0-215a-4648-979d-693eaff910d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.672645 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.672775 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-var-lock\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.692156 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8699b4d0-215a-4648-979d-693eaff910d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:14 crc kubenswrapper[4931]: I0131 04:27:14.871186 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:16 crc kubenswrapper[4931]: E0131 04:27:16.508007 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wpx9n" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" Jan 31 04:27:16 crc kubenswrapper[4931]: E0131 04:27:16.866424 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 04:27:16 crc kubenswrapper[4931]: E0131 04:27:16.866621 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hmjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sxzrv_openshift-marketplace(ac09cac9-6450-4c5c-8b61-5a2c5b60e26b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:27:16 crc kubenswrapper[4931]: E0131 04:27:16.867932 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sxzrv" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" Jan 31 04:27:17 crc kubenswrapper[4931]: E0131 04:27:17.069053 4931 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 04:27:17 crc kubenswrapper[4931]: E0131 04:27:17.069251 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6p6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jmxsb_openshift-marketplace(2bff2dda-54db-4a36-a7a5-af34cf3367dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:27:17 crc kubenswrapper[4931]: E0131 04:27:17.071028 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jmxsb" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.217214 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jmxsb" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.217251 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sxzrv" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.511604 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.511921 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2zf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nnmk2_openshift-marketplace(909dcbcc-c48a-4180-8b14-524e5839eaef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.513137 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nnmk2" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" Jan 31 04:27:18 crc kubenswrapper[4931]: I0131 04:27:18.612763 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.734987 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.735177 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmhjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2h5z9_openshift-marketplace(6fd4e055-c0fd-4afb-873d-9920d5765466): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:27:18 crc kubenswrapper[4931]: E0131 04:27:18.736346 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2h5z9" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" Jan 31 04:27:18 crc kubenswrapper[4931]: I0131 04:27:18.747882 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.176631 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.176828 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmtj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-62qpd_openshift-marketplace(73bfe314-a35b-4a93-b1cb-f1f7a4b2756c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.177992 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-62qpd" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.326751 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" event={"ID":"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342","Type":"ContainerStarted","Data":"4367cc1381afb8b9071b19bebd2df758327d8cc40e3a6c2e254b1dc01e016d7c"} Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.326807 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4cc6z" event={"ID":"df3b0cb6-a55b-4e12-9cfa-ffd42d1c0342","Type":"ContainerStarted","Data":"c8495adbe34dcc78a3db6df1d3eac08019b29392246e1c157099c63be3fafa1e"} Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.327648 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d4f1f3ea-dc0b-476a-b5ef-80861af5de02","Type":"ContainerStarted","Data":"163bdff972a6a2d4f146df893e45beed94dda98a69e553f986ae8c37a6c764c3"} Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.327681 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d4f1f3ea-dc0b-476a-b5ef-80861af5de02","Type":"ContainerStarted","Data":"465f6eb776c2b6ea25cf95d2b3a74bb7de99726bb697f0400d280bab58ac6ae5"} Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.329144 4931 generic.go:334] "Generic (PLEG): container finished" podID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerID="57f65a19f7fc06e250f10b159ddd7a64afbdbf2fdb25cb00ac56f9e018b8b688" exitCode=0 Jan 31 04:27:19 crc 
kubenswrapper[4931]: I0131 04:27:19.329223 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfs9t" event={"ID":"471d6f3b-ab9a-414a-b14f-d719d2d5e96a","Type":"ContainerDied","Data":"57f65a19f7fc06e250f10b159ddd7a64afbdbf2fdb25cb00ac56f9e018b8b688"} Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.331687 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8699b4d0-215a-4648-979d-693eaff910d7","Type":"ContainerStarted","Data":"14b2f7abc0340adfdf48b6061edcf699903db8b5a57b41d654aacf27c514a7a6"} Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.331738 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8699b4d0-215a-4648-979d-693eaff910d7","Type":"ContainerStarted","Data":"8048e23ed5c003bb2f66cd5885d9e01c73198b185e49aed0513a252edb8f63ba"} Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.332541 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nnmk2" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.337989 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-62qpd" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.338108 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2h5z9" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.353010 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4cc6z" podStartSLOduration=177.352987116 podStartE2EDuration="2m57.352987116s" podCreationTimestamp="2026-01-31 04:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:27:19.349378704 +0000 UTC m=+198.158607578" watchObservedRunningTime="2026-01-31 04:27:19.352987116 +0000 UTC m=+198.162215990" Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.393680 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.39365509 podStartE2EDuration="5.39365509s" podCreationTimestamp="2026-01-31 04:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:27:19.391241431 +0000 UTC m=+198.200470325" watchObservedRunningTime="2026-01-31 04:27:19.39365509 +0000 UTC m=+198.202883964" Jan 31 04:27:19 crc kubenswrapper[4931]: I0131 04:27:19.429622 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=9.42960422 podStartE2EDuration="9.42960422s" podCreationTimestamp="2026-01-31 
04:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:27:19.412869395 +0000 UTC m=+198.222098269" watchObservedRunningTime="2026-01-31 04:27:19.42960422 +0000 UTC m=+198.238833084" Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.485775 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.485917 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp96j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-brnp5_openshift-marketplace(c3f1936c-4896-4468-b5c3-958691b633b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:27:19 crc kubenswrapper[4931]: E0131 04:27:19.487133 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-brnp5" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" Jan 31 04:27:20 crc kubenswrapper[4931]: I0131 04:27:20.338637 4931 generic.go:334] "Generic (PLEG): container finished" podID="d4f1f3ea-dc0b-476a-b5ef-80861af5de02" containerID="163bdff972a6a2d4f146df893e45beed94dda98a69e553f986ae8c37a6c764c3" exitCode=0 Jan 31 04:27:20 crc kubenswrapper[4931]: I0131 04:27:20.338759 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d4f1f3ea-dc0b-476a-b5ef-80861af5de02","Type":"ContainerDied","Data":"163bdff972a6a2d4f146df893e45beed94dda98a69e553f986ae8c37a6c764c3"} Jan 31 04:27:20 crc kubenswrapper[4931]: E0131 04:27:20.345698 4931 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-brnp5" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.133576 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.134102 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.349305 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfs9t" event={"ID":"471d6f3b-ab9a-414a-b14f-d719d2d5e96a","Type":"ContainerStarted","Data":"571cc45d42c9095da3bfe4102898f92ca604405d362d0bda47c8877f31479363"} Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.371543 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfs9t" podStartSLOduration=3.516818147 podStartE2EDuration="52.371525049s" podCreationTimestamp="2026-01-31 04:26:29 +0000 UTC" firstStartedPulling="2026-01-31 04:26:31.469809315 +0000 UTC m=+150.279038179" lastFinishedPulling="2026-01-31 04:27:20.324516177 +0000 UTC m=+199.133745081" observedRunningTime="2026-01-31 04:27:21.366145802 +0000 UTC m=+200.175374716" watchObservedRunningTime="2026-01-31 04:27:21.371525049 +0000 UTC m=+200.180753913" Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.667672 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.765448 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kubelet-dir\") pod \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.765624 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d4f1f3ea-dc0b-476a-b5ef-80861af5de02" (UID: "d4f1f3ea-dc0b-476a-b5ef-80861af5de02"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.765641 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kube-api-access\") pod \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\" (UID: \"d4f1f3ea-dc0b-476a-b5ef-80861af5de02\") " Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.765983 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.773184 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d4f1f3ea-dc0b-476a-b5ef-80861af5de02" (UID: "d4f1f3ea-dc0b-476a-b5ef-80861af5de02"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:27:21 crc kubenswrapper[4931]: I0131 04:27:21.867396 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f1f3ea-dc0b-476a-b5ef-80861af5de02-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:22 crc kubenswrapper[4931]: I0131 04:27:22.355947 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:27:22 crc kubenswrapper[4931]: I0131 04:27:22.355956 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d4f1f3ea-dc0b-476a-b5ef-80861af5de02","Type":"ContainerDied","Data":"465f6eb776c2b6ea25cf95d2b3a74bb7de99726bb697f0400d280bab58ac6ae5"} Jan 31 04:27:22 crc kubenswrapper[4931]: I0131 04:27:22.356015 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="465f6eb776c2b6ea25cf95d2b3a74bb7de99726bb697f0400d280bab58ac6ae5" Jan 31 04:27:29 crc kubenswrapper[4931]: I0131 04:27:29.976108 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:27:29 crc kubenswrapper[4931]: I0131 04:27:29.976629 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:27:30 crc kubenswrapper[4931]: I0131 04:27:30.452283 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:27:30 crc kubenswrapper[4931]: I0131 04:27:30.492697 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:27:34 crc kubenswrapper[4931]: I0131 04:27:34.429191 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpx9n" event={"ID":"a2e61b47-71de-4eff-a485-7ace762bad74","Type":"ContainerStarted","Data":"08edec43a83436aa01234bb4d3fad4226e5a88b3088990919a732207d29e619c"} Jan 31 04:27:34 crc kubenswrapper[4931]: I0131 04:27:34.432604 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxzrv" event={"ID":"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b","Type":"ContainerStarted","Data":"4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593"} Jan 31 04:27:35 crc 
kubenswrapper[4931]: I0131 04:27:35.253666 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bmrsg"] Jan 31 04:27:35 crc kubenswrapper[4931]: I0131 04:27:35.441677 4931 generic.go:334] "Generic (PLEG): container finished" podID="a2e61b47-71de-4eff-a485-7ace762bad74" containerID="08edec43a83436aa01234bb4d3fad4226e5a88b3088990919a732207d29e619c" exitCode=0 Jan 31 04:27:35 crc kubenswrapper[4931]: I0131 04:27:35.442325 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpx9n" event={"ID":"a2e61b47-71de-4eff-a485-7ace762bad74","Type":"ContainerDied","Data":"08edec43a83436aa01234bb4d3fad4226e5a88b3088990919a732207d29e619c"} Jan 31 04:27:35 crc kubenswrapper[4931]: I0131 04:27:35.445369 4931 generic.go:334] "Generic (PLEG): container finished" podID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerID="4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593" exitCode=0 Jan 31 04:27:35 crc kubenswrapper[4931]: I0131 04:27:35.445403 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxzrv" event={"ID":"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b","Type":"ContainerDied","Data":"4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593"} Jan 31 04:27:36 crc kubenswrapper[4931]: I0131 04:27:36.451789 4931 generic.go:334] "Generic (PLEG): container finished" podID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerID="38e6badf128515e384543841e1ed6c7d7de678e4ac47f626f4f213d6eea0a706" exitCode=0 Jan 31 04:27:36 crc kubenswrapper[4931]: I0131 04:27:36.451830 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmxsb" event={"ID":"2bff2dda-54db-4a36-a7a5-af34cf3367dc","Type":"ContainerDied","Data":"38e6badf128515e384543841e1ed6c7d7de678e4ac47f626f4f213d6eea0a706"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.473501 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brnp5" event={"ID":"c3f1936c-4896-4468-b5c3-958691b633b7","Type":"ContainerStarted","Data":"91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.479674 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpx9n" event={"ID":"a2e61b47-71de-4eff-a485-7ace762bad74","Type":"ContainerStarted","Data":"8ce5632c2824db6fb1e2484b806fa5bb36ec7e90427d98302b1a074fc20e1e70"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.482301 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmxsb" event={"ID":"2bff2dda-54db-4a36-a7a5-af34cf3367dc","Type":"ContainerStarted","Data":"234befd7655a7a7aaed306a0b21c774079633c1f2c7fe2e9df5b6eb04959ad8b"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.487180 4931 generic.go:334] "Generic (PLEG): container finished" podID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerID="602c87e45487ddb3d76e17726d55932e42760eb1791fff92f1c7367f79383a4b" exitCode=0 Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.487233 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h5z9" event={"ID":"6fd4e055-c0fd-4afb-873d-9920d5765466","Type":"ContainerDied","Data":"602c87e45487ddb3d76e17726d55932e42760eb1791fff92f1c7367f79383a4b"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.493970 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sxzrv" event={"ID":"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b","Type":"ContainerStarted","Data":"c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.497346 4931 generic.go:334] "Generic (PLEG): container finished" podID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerID="7f01103274f324f492d5a649df57801e008b968d84687e8992b7950929d8a91e" exitCode=0 Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.497457 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62qpd" event={"ID":"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c","Type":"ContainerDied","Data":"7f01103274f324f492d5a649df57801e008b968d84687e8992b7950929d8a91e"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.500062 4931 generic.go:334] "Generic (PLEG): container finished" podID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerID="ac26326d7a46e655f40b02697945944f2aa33fef304a08bc470d6ded36605bc4" exitCode=0 Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.500117 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmk2" event={"ID":"909dcbcc-c48a-4180-8b14-524e5839eaef","Type":"ContainerDied","Data":"ac26326d7a46e655f40b02697945944f2aa33fef304a08bc470d6ded36605bc4"} Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.518580 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxzrv" podStartSLOduration=3.771165866 podStartE2EDuration="1m8.518561553s" podCreationTimestamp="2026-01-31 04:26:31 +0000 UTC" firstStartedPulling="2026-01-31 04:26:33.903559622 +0000 UTC m=+152.712788496" lastFinishedPulling="2026-01-31 04:27:38.650955309 +0000 UTC m=+217.460184183" observedRunningTime="2026-01-31 04:27:39.51629967 +0000 UTC m=+218.325528544" watchObservedRunningTime="2026-01-31 04:27:39.518561553 +0000 UTC m=+218.327790427" Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.536796 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jmxsb" podStartSLOduration=2.467973319 podStartE2EDuration="1m8.536775392s" podCreationTimestamp="2026-01-31 04:26:31 +0000 UTC" firstStartedPulling="2026-01-31 04:26:32.727864058 +0000 UTC m=+151.537092922" lastFinishedPulling="2026-01-31 04:27:38.796666121 +0000 UTC m=+217.605894995" observedRunningTime="2026-01-31 04:27:39.533080106 +0000 UTC m=+218.342308980" watchObservedRunningTime="2026-01-31 04:27:39.536775392 +0000 UTC m=+218.346004256" Jan 31 04:27:39 crc kubenswrapper[4931]: I0131 04:27:39.563937 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wpx9n" podStartSLOduration=2.75759707 podStartE2EDuration="1m7.563920869s" podCreationTimestamp="2026-01-31 04:26:32 +0000 UTC" firstStartedPulling="2026-01-31 04:26:33.845822374 +0000 UTC m=+152.655051248" lastFinishedPulling="2026-01-31 04:27:38.652146173 +0000 UTC m=+217.461375047" observedRunningTime="2026-01-31 04:27:39.561540371 +0000 UTC m=+218.370769245" watchObservedRunningTime="2026-01-31 04:27:39.563920869 +0000 UTC m=+218.373149743" Jan 31 04:27:40 crc kubenswrapper[4931]: I0131 04:27:40.509896 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3f1936c-4896-4468-b5c3-958691b633b7" containerID="91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c" exitCode=0 Jan 31 04:27:40 crc kubenswrapper[4931]: I0131 
04:27:40.509978 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brnp5" event={"ID":"c3f1936c-4896-4468-b5c3-958691b633b7","Type":"ContainerDied","Data":"91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c"} Jan 31 04:27:41 crc kubenswrapper[4931]: I0131 04:27:41.518646 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h5z9" event={"ID":"6fd4e055-c0fd-4afb-873d-9920d5765466","Type":"ContainerStarted","Data":"10ff3262489b5910235604a2ea9b45d585275d72194510c26fe8a8e3a3974227"} Jan 31 04:27:41 crc kubenswrapper[4931]: I0131 04:27:41.925344 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:27:41 crc kubenswrapper[4931]: I0131 04:27:41.925498 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:27:41 crc kubenswrapper[4931]: I0131 04:27:41.983236 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.001617 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2h5z9" podStartSLOduration=4.514810963 podStartE2EDuration="1m13.001597384s" podCreationTimestamp="2026-01-31 04:26:29 +0000 UTC" firstStartedPulling="2026-01-31 04:26:32.742461308 +0000 UTC m=+151.551690182" lastFinishedPulling="2026-01-31 04:27:41.229247719 +0000 UTC m=+220.038476603" observedRunningTime="2026-01-31 04:27:41.539793544 +0000 UTC m=+220.349022418" watchObservedRunningTime="2026-01-31 04:27:42.001597384 +0000 UTC m=+220.810826258" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.255638 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.256161 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.305813 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.527619 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brnp5" event={"ID":"c3f1936c-4896-4468-b5c3-958691b633b7","Type":"ContainerStarted","Data":"83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7"} Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.531441 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62qpd" event={"ID":"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c","Type":"ContainerStarted","Data":"baf4221704c0294db56f36ab1def54562647b8e0eded3f83ec2eac2ead8884a7"} Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.533559 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmk2" event={"ID":"909dcbcc-c48a-4180-8b14-524e5839eaef","Type":"ContainerStarted","Data":"f9a19938b90919054b2b993665631d062605153eaa98909e8859a930c980477f"} Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.552324 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-brnp5" podStartSLOduration=3.845577493 
podStartE2EDuration="1m10.552306779s" podCreationTimestamp="2026-01-31 04:26:32 +0000 UTC" firstStartedPulling="2026-01-31 04:26:34.968537303 +0000 UTC m=+153.777766177" lastFinishedPulling="2026-01-31 04:27:41.675266589 +0000 UTC m=+220.484495463" observedRunningTime="2026-01-31 04:27:42.551307983 +0000 UTC m=+221.360536867" watchObservedRunningTime="2026-01-31 04:27:42.552306779 +0000 UTC m=+221.361535653" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.576056 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nnmk2" podStartSLOduration=3.595435013 podStartE2EDuration="1m13.576036271s" podCreationTimestamp="2026-01-31 04:26:29 +0000 UTC" firstStartedPulling="2026-01-31 04:26:31.38449283 +0000 UTC m=+150.193721704" lastFinishedPulling="2026-01-31 04:27:41.365094088 +0000 UTC m=+220.174322962" observedRunningTime="2026-01-31 04:27:42.57302108 +0000 UTC m=+221.382249954" watchObservedRunningTime="2026-01-31 04:27:42.576036271 +0000 UTC m=+221.385265145" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.592682 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62qpd" podStartSLOduration=4.957116537 podStartE2EDuration="1m13.592662282s" podCreationTimestamp="2026-01-31 04:26:29 +0000 UTC" firstStartedPulling="2026-01-31 04:26:32.70959842 +0000 UTC m=+151.518827294" lastFinishedPulling="2026-01-31 04:27:41.345144175 +0000 UTC m=+220.154373039" observedRunningTime="2026-01-31 04:27:42.590875936 +0000 UTC m=+221.400104810" watchObservedRunningTime="2026-01-31 04:27:42.592662282 +0000 UTC m=+221.401891156" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.842228 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:27:42 crc kubenswrapper[4931]: I0131 04:27:42.842952 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:27:43 crc kubenswrapper[4931]: I0131 04:27:43.293276 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:27:43 crc kubenswrapper[4931]: I0131 04:27:43.293452 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:27:43 crc kubenswrapper[4931]: I0131 04:27:43.582001 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:27:43 crc kubenswrapper[4931]: I0131 04:27:43.889697 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wpx9n" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="registry-server" probeResult="failure" output=< Jan 31 04:27:43 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 31 04:27:43 crc kubenswrapper[4931]: > Jan 31 04:27:44 crc kubenswrapper[4931]: I0131 04:27:44.147799 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxzrv"] Jan 31 04:27:44 crc kubenswrapper[4931]: I0131 04:27:44.335925 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-brnp5" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="registry-server" probeResult="failure" output=< Jan 31 04:27:44 crc kubenswrapper[4931]: timeout: failed to connect service 
":50051" within 1s Jan 31 04:27:44 crc kubenswrapper[4931]: > Jan 31 04:27:45 crc kubenswrapper[4931]: I0131 04:27:45.552573 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sxzrv" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="registry-server" containerID="cri-o://c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c" gracePeriod=2 Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.521094 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.591488 4931 generic.go:334] "Generic (PLEG): container finished" podID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerID="c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c" exitCode=0 Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.591534 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxzrv" event={"ID":"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b","Type":"ContainerDied","Data":"c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c"} Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.591561 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxzrv" event={"ID":"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b","Type":"ContainerDied","Data":"59c5d537c4fb0735a142fe3b14baa0b5bf9b26e2282e1ca93d40b5caa6dee21f"} Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.591577 4931 scope.go:117] "RemoveContainer" containerID="c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.591741 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxzrv" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.607346 4931 scope.go:117] "RemoveContainer" containerID="4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.632778 4931 scope.go:117] "RemoveContainer" containerID="be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.639180 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-utilities\") pod \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.639285 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-catalog-content\") pod \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.639373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hmjj\" (UniqueName: \"kubernetes.io/projected/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-kube-api-access-7hmjj\") pod \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\" (UID: \"ac09cac9-6450-4c5c-8b61-5a2c5b60e26b\") " Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.642096 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-utilities" (OuterVolumeSpecName: "utilities") pod "ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" (UID: "ac09cac9-6450-4c5c-8b61-5a2c5b60e26b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.650769 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-kube-api-access-7hmjj" (OuterVolumeSpecName: "kube-api-access-7hmjj") pod "ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" (UID: "ac09cac9-6450-4c5c-8b61-5a2c5b60e26b"). InnerVolumeSpecName "kube-api-access-7hmjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.659734 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.659791 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hmjj\" (UniqueName: \"kubernetes.io/projected/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-kube-api-access-7hmjj\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.673947 4931 scope.go:117] "RemoveContainer" containerID="c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c" Jan 31 04:27:47 crc kubenswrapper[4931]: E0131 04:27:47.674553 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c\": container with ID starting with c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c not found: ID does not exist" containerID="c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.674588 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c"} err="failed to get container status \"c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c\": rpc error: code = NotFound desc = could not find container \"c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c\": container with ID starting with c24c1e2d69351afcb485f3a795e9f968fd3c622c8683bac2cefdd1076a78fa0c not found: ID does not exist" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.674647 4931 scope.go:117] "RemoveContainer" containerID="4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593" Jan 31 04:27:47 crc kubenswrapper[4931]: E0131 04:27:47.677002 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593\": container with ID starting with 4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593 not found: ID does not exist" containerID="4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.677052 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593"} err="failed to get container status \"4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593\": rpc error: code = NotFound desc = could not find container \"4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593\": container with ID starting with 4175b5d311899dc92dc3a5170024805ff97f7d90ee2d7041dac0d3cc26a35593 not found: ID does not exist" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.677081 4931 scope.go:117] "RemoveContainer" containerID="be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb" Jan 31 04:27:47 crc kubenswrapper[4931]: E0131 04:27:47.677490 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb\": container with ID starting with 
be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb not found: ID does not exist" containerID="be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.677559 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb"} err="failed to get container status \"be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb\": rpc error: code = NotFound desc = could not find container \"be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb\": container with ID starting with be7ff901c264e53179333ab4a14fc71b50f0b20a97cfae22e2f16a4d455495fb not found: ID does not exist" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.685299 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" (UID: "ac09cac9-6450-4c5c-8b61-5a2c5b60e26b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.762059 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.931395 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxzrv"] Jan 31 04:27:47 crc kubenswrapper[4931]: I0131 04:27:47.939873 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxzrv"] Jan 31 04:27:49 crc kubenswrapper[4931]: I0131 04:27:49.818558 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:27:49 crc kubenswrapper[4931]: I0131 04:27:49.818620 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:27:49 crc kubenswrapper[4931]: I0131 04:27:49.867675 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:27:49 crc kubenswrapper[4931]: I0131 04:27:49.904282 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" path="/var/lib/kubelet/pods/ac09cac9-6450-4c5c-8b61-5a2c5b60e26b/volumes" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.125706 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.126889 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.167062 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.292987 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.293079 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.348184 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.644468 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.644525 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:27:50 crc kubenswrapper[4931]: I0131 04:27:50.654635 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:27:51 crc kubenswrapper[4931]: I0131 04:27:51.133172 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:27:51 crc kubenswrapper[4931]: I0131 04:27:51.133252 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:27:51 crc kubenswrapper[4931]: I0131 04:27:51.133319 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:27:51 crc kubenswrapper[4931]: I0131 04:27:51.134007 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:27:51 crc kubenswrapper[4931]: I0131 04:27:51.134084 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9" gracePeriod=600 Jan 31 04:27:51 crc kubenswrapper[4931]: I0131 04:27:51.964780 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:27:52 crc kubenswrapper[4931]: I0131 04:27:52.549476 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nnmk2"] Jan 31 04:27:52 crc kubenswrapper[4931]: I0131 04:27:52.622298 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9" exitCode=0 Jan 31 04:27:52 crc kubenswrapper[4931]: I0131 04:27:52.622365 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9"} 
Jan 31 04:27:52 crc kubenswrapper[4931]: I0131 04:27:52.894431 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:27:52 crc kubenswrapper[4931]: I0131 04:27:52.951061 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:27:53 crc kubenswrapper[4931]: I0131 04:27:53.341740 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:27:53 crc kubenswrapper[4931]: I0131 04:27:53.379104 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:27:53 crc kubenswrapper[4931]: I0131 04:27:53.629007 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nnmk2" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="registry-server" containerID="cri-o://f9a19938b90919054b2b993665631d062605153eaa98909e8859a930c980477f" gracePeriod=2 Jan 31 04:27:53 crc kubenswrapper[4931]: I0131 04:27:53.945957 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62qpd"] Jan 31 04:27:53 crc kubenswrapper[4931]: I0131 04:27:53.946203 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62qpd" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerName="registry-server" containerID="cri-o://baf4221704c0294db56f36ab1def54562647b8e0eded3f83ec2eac2ead8884a7" gracePeriod=2 Jan 31 04:27:54 crc kubenswrapper[4931]: I0131 04:27:54.638323 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"bdf5eb257c3a81d0040b92141c1d8e85526b1c5ea3208409eec00a505ebefc2a"} Jan 31 04:27:54 crc kubenswrapper[4931]: I0131 04:27:54.642512 4931 generic.go:334] "Generic (PLEG): container finished" podID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerID="f9a19938b90919054b2b993665631d062605153eaa98909e8859a930c980477f" exitCode=0 Jan 31 04:27:54 crc kubenswrapper[4931]: I0131 04:27:54.642593 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmk2" event={"ID":"909dcbcc-c48a-4180-8b14-524e5839eaef","Type":"ContainerDied","Data":"f9a19938b90919054b2b993665631d062605153eaa98909e8859a930c980477f"} Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.653624 4931 generic.go:334] "Generic (PLEG): container finished" podID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerID="baf4221704c0294db56f36ab1def54562647b8e0eded3f83ec2eac2ead8884a7" exitCode=0 Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.654187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62qpd" event={"ID":"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c","Type":"ContainerDied","Data":"baf4221704c0294db56f36ab1def54562647b8e0eded3f83ec2eac2ead8884a7"} Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.786031 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.871794 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2zf2\" (UniqueName: \"kubernetes.io/projected/909dcbcc-c48a-4180-8b14-524e5839eaef-kube-api-access-x2zf2\") pod \"909dcbcc-c48a-4180-8b14-524e5839eaef\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.871862 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-utilities\") pod \"909dcbcc-c48a-4180-8b14-524e5839eaef\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.871889 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-catalog-content\") pod \"909dcbcc-c48a-4180-8b14-524e5839eaef\" (UID: \"909dcbcc-c48a-4180-8b14-524e5839eaef\") " Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.873187 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-utilities" (OuterVolumeSpecName: "utilities") pod "909dcbcc-c48a-4180-8b14-524e5839eaef" (UID: "909dcbcc-c48a-4180-8b14-524e5839eaef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.885004 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909dcbcc-c48a-4180-8b14-524e5839eaef-kube-api-access-x2zf2" (OuterVolumeSpecName: "kube-api-access-x2zf2") pod "909dcbcc-c48a-4180-8b14-524e5839eaef" (UID: "909dcbcc-c48a-4180-8b14-524e5839eaef"). InnerVolumeSpecName "kube-api-access-x2zf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.926487 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "909dcbcc-c48a-4180-8b14-524e5839eaef" (UID: "909dcbcc-c48a-4180-8b14-524e5839eaef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.974304 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.974356 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909dcbcc-c48a-4180-8b14-524e5839eaef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:55 crc kubenswrapper[4931]: I0131 04:27:55.974373 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2zf2\" (UniqueName: \"kubernetes.io/projected/909dcbcc-c48a-4180-8b14-524e5839eaef-kube-api-access-x2zf2\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.308065 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.381083 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-catalog-content\") pod \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.381239 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmtj6\" (UniqueName: \"kubernetes.io/projected/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-kube-api-access-cmtj6\") pod \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.381312 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-utilities\") pod \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\" (UID: \"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c\") " Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.382646 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-utilities" (OuterVolumeSpecName: "utilities") pod "73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" (UID: "73bfe314-a35b-4a93-b1cb-f1f7a4b2756c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.385842 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-kube-api-access-cmtj6" (OuterVolumeSpecName: "kube-api-access-cmtj6") pod "73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" (UID: "73bfe314-a35b-4a93-b1cb-f1f7a4b2756c"). InnerVolumeSpecName "kube-api-access-cmtj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.442062 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" (UID: "73bfe314-a35b-4a93-b1cb-f1f7a4b2756c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.482507 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.482538 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmtj6\" (UniqueName: \"kubernetes.io/projected/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-kube-api-access-cmtj6\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.482548 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.616321 4931 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.616836 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f" gracePeriod=15 Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.616858 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed" gracePeriod=15 Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.616837 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa" gracePeriod=15 Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.616920 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2" gracePeriod=15 Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.617139 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8" gracePeriod=15 Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.619156 4931 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.619511 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="extract-content" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.619606 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="extract-content" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.619688 
4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.619786 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.619858 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.619921 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.619991 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.620063 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.620139 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="extract-utilities" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.620218 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="extract-utilities" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.620301 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f1f3ea-dc0b-476a-b5ef-80861af5de02" containerName="pruner" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.620367 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f1f3ea-dc0b-476a-b5ef-80861af5de02" containerName="pruner" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.620436 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="extract-content" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.620499 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="extract-content" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.620565 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.620631 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.620699 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.620800 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.620910 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerName="extract-utilities" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.621000 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" 
containerName="extract-utilities" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.621086 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.621154 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.621247 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.621346 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.621444 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.621554 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.621660 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerName="extract-content" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.621773 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerName="extract-content" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.621895 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="extract-utilities" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.622020 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="extract-utilities" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.622111 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.622179 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.623704 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.623850 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.624171 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.624266 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.624336 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f1f3ea-dc0b-476a-b5ef-80861af5de02" containerName="pruner" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.624410 4931 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.624491 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac09cac9-6450-4c5c-8b61-5a2c5b60e26b" containerName="registry-server" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.624567 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.624633 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 04:27:56 crc kubenswrapper[4931]: E0131 04:27:56.624924 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.625130 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.625798 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.627015 4931 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.627671 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.631604 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.678152 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62qpd" event={"ID":"73bfe314-a35b-4a93-b1cb-f1f7a4b2756c","Type":"ContainerDied","Data":"3aba2a90ab86b8411d601ee13e005e87ad037dd4258a18147ea332500c544763"} Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.678216 4931 scope.go:117] "RemoveContainer" containerID="baf4221704c0294db56f36ab1def54562647b8e0eded3f83ec2eac2ead8884a7" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.678369 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62qpd" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.680638 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685128 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685206 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685249 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685298 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685355 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685418 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685508 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.685589 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.688312 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmk2" event={"ID":"909dcbcc-c48a-4180-8b14-524e5839eaef","Type":"ContainerDied","Data":"29ccaccf88b8a358bcf9d2f17553ad3a9bd2c9c1fafca9148bf0cb897810c5b6"} Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.688528 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nnmk2" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.690083 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.690428 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.700230 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.700753 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.718015 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.718027 4931 scope.go:117] "RemoveContainer" containerID="7f01103274f324f492d5a649df57801e008b968d84687e8992b7950929d8a91e" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.718292 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.763218 4931 scope.go:117] "RemoveContainer" containerID="53f14c9fe0fce15c8a4024c1b24f60d7b8512056856c96a7048e3a51774325f7" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 
04:27:56.777193 4931 scope.go:117] "RemoveContainer" containerID="f9a19938b90919054b2b993665631d062605153eaa98909e8859a930c980477f" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.786333 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.786371 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.786404 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.786438 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.786467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.786487 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789453 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789598 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789594 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789815 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789833 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789858 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.789957 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.790002 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.790040 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.805005 4931 scope.go:117] "RemoveContainer" containerID="ac26326d7a46e655f40b02697945944f2aa33fef304a08bc470d6ded36605bc4" Jan 31 04:27:56 crc kubenswrapper[4931]: I0131 04:27:56.819863 4931 scope.go:117] "RemoveContainer" containerID="d0d56d7005b79638fbef8144e9c58fc4db8b45cfeda8bad506eee895d03780ce" Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.695079 4931 generic.go:334] "Generic (PLEG): container finished" podID="8699b4d0-215a-4648-979d-693eaff910d7" containerID="14b2f7abc0340adfdf48b6061edcf699903db8b5a57b41d654aacf27c514a7a6" exitCode=0 Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.695179 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"8699b4d0-215a-4648-979d-693eaff910d7","Type":"ContainerDied","Data":"14b2f7abc0340adfdf48b6061edcf699903db8b5a57b41d654aacf27c514a7a6"} Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.696450 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.696747 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.697065 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.701045 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.702263 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.703061 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2" exitCode=0 Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.703097 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8" exitCode=0 Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.703105 4931 scope.go:117] "RemoveContainer" containerID="e993e921e4c6fda786c8aa5a65af9909de4b7461325fb48b3ec23273987bf6e7" Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.703111 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa" exitCode=0 Jan 31 04:27:57 crc kubenswrapper[4931]: I0131 04:27:57.703126 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed" exitCode=2 Jan 31 04:27:58 crc kubenswrapper[4931]: I0131 04:27:58.725406 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.018583 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.020057 4931 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.020659 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.020751 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.021185 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.021440 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.021647 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.021945 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.022298 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.022584 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.022856 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114754 4931 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8699b4d0-215a-4648-979d-693eaff910d7-kube-api-access\") pod \"8699b4d0-215a-4648-979d-693eaff910d7\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114804 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-kubelet-dir\") pod \"8699b4d0-215a-4648-979d-693eaff910d7\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114838 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114860 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114878 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-var-lock\") pod \"8699b4d0-215a-4648-979d-693eaff910d7\" (UID: \"8699b4d0-215a-4648-979d-693eaff910d7\") " Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114894 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8699b4d0-215a-4648-979d-693eaff910d7" (UID: "8699b4d0-215a-4648-979d-693eaff910d7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114911 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114953 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114970 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114963 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.114984 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-var-lock" (OuterVolumeSpecName: "var-lock") pod "8699b4d0-215a-4648-979d-693eaff910d7" (UID: "8699b4d0-215a-4648-979d-693eaff910d7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.115414 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.115437 4931 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.115446 4931 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.115455 4931 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8699b4d0-215a-4648-979d-693eaff910d7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.115464 4931 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.121175 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8699b4d0-215a-4648-979d-693eaff910d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8699b4d0-215a-4648-979d-693eaff910d7" (UID: "8699b4d0-215a-4648-979d-693eaff910d7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.216809 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8699b4d0-215a-4648-979d-693eaff910d7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.740817 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.743428 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f" exitCode=0 Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.743566 4931 scope.go:117] "RemoveContainer" containerID="50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.743595 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.745715 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8699b4d0-215a-4648-979d-693eaff910d7","Type":"ContainerDied","Data":"8048e23ed5c003bb2f66cd5885d9e01c73198b185e49aed0513a252edb8f63ba"} Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.745781 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8048e23ed5c003bb2f66cd5885d9e01c73198b185e49aed0513a252edb8f63ba" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.745840 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.772124 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.772589 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.773680 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.774282 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.774903 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.775601 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.775973 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 
04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.776046 4931 scope.go:117] "RemoveContainer" containerID="6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.776402 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.807053 4931 scope.go:117] "RemoveContainer" containerID="2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.827870 4931 scope.go:117] "RemoveContainer" containerID="9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.846894 4931 scope.go:117] "RemoveContainer" containerID="2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.865598 4931 scope.go:117] "RemoveContainer" containerID="acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.893173 4931 scope.go:117] "RemoveContainer" containerID="50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2" Jan 31 04:27:59 crc kubenswrapper[4931]: E0131 04:27:59.897814 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\": container with ID starting with 50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2 not found: ID does not exist" containerID="50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.897863 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2"} err="failed to get container status \"50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\": rpc error: code = NotFound desc = could not find container \"50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2\": container with ID starting with 50217a60f21b7d6167cc679ec9c66015e75a8dee182c9d2a1834b8ef221fffb2 not found: ID does not exist" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.897892 4931 scope.go:117] "RemoveContainer" containerID="6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8" Jan 31 04:27:59 crc kubenswrapper[4931]: E0131 04:27:59.899343 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\": container with ID starting with 6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8 not found: ID does not exist" containerID="6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.899390 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8"} err="failed to get container status \"6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\": rpc error: code = NotFound desc = could not find container 
\"6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8\": container with ID starting with 6cabc20a1e7a06084feb446c52345ceba9d5f645be59cf06593c799b8debc1b8 not found: ID does not exist" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.899411 4931 scope.go:117] "RemoveContainer" containerID="2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa" Jan 31 04:27:59 crc kubenswrapper[4931]: E0131 04:27:59.900190 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\": container with ID starting with 2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa not found: ID does not exist" containerID="2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.900212 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa"} err="failed to get container status \"2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\": rpc error: code = NotFound desc = could not find container \"2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa\": container with ID starting with 2b8a2490e858d5f62b9eb440a037cba11fd9d143a71c6bdbef3eda7225fb57fa not found: ID does not exist" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.900225 4931 scope.go:117] "RemoveContainer" containerID="9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed" Jan 31 04:27:59 crc kubenswrapper[4931]: E0131 04:27:59.900883 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\": container with ID starting with 9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed not found: ID does not exist" containerID="9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.900906 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed"} err="failed to get container status \"9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\": rpc error: code = NotFound desc = could not find container \"9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed\": container with ID starting with 9e6f6ef457426ca4f77fdcff064ac9309db8fd36c3fb7c7833d079b6e80237ed not found: ID does not exist" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.900923 4931 scope.go:117] "RemoveContainer" containerID="2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f" Jan 31 04:27:59 crc kubenswrapper[4931]: E0131 04:27:59.901443 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\": container with ID starting with 2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f not found: ID does not exist" containerID="2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.901470 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f"} 
err="failed to get container status \"2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\": rpc error: code = NotFound desc = could not find container \"2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f\": container with ID starting with 2f59a30f2f7a298a34eb69da24b88e9c066ab925521ec7029ecba29d2d18c99f not found: ID does not exist" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.901488 4931 scope.go:117] "RemoveContainer" containerID="acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca" Jan 31 04:27:59 crc kubenswrapper[4931]: E0131 04:27:59.901810 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\": container with ID starting with acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca not found: ID does not exist" containerID="acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.901830 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca"} err="failed to get container status \"acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\": rpc error: code = NotFound desc = could not find container \"acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca\": container with ID starting with acdfd5f0ea48f05a4bd1158f0bd1360423c06cc249177200caf1a4b34b411bca not found: ID does not exist" Jan 31 04:27:59 crc kubenswrapper[4931]: I0131 04:27:59.906303 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.285789 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" containerName="oauth-openshift" containerID="cri-o://4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8" gracePeriod=15 Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.744304 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.745275 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.745808 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.746787 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.747205 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.753594 4931 generic.go:334] "Generic (PLEG): container finished" podID="34be6968-eb64-46a3-9e5f-f5568d764d8d" containerID="4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8" exitCode=0 Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.753637 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" event={"ID":"34be6968-eb64-46a3-9e5f-f5568d764d8d","Type":"ContainerDied","Data":"4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8"} Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.753670 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" event={"ID":"34be6968-eb64-46a3-9e5f-f5568d764d8d","Type":"ContainerDied","Data":"4e9b25fa9b8ca9a911b8e5cdd201f9d3c1f57422ba4197604fa827a715e6aaef"} Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.753675 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.753690 4931 scope.go:117] "RemoveContainer" containerID="4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.754976 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.755337 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.755859 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.756200 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.770782 4931 scope.go:117] "RemoveContainer" containerID="4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8" Jan 31 04:28:00 crc kubenswrapper[4931]: E0131 04:28:00.771281 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8\": container with ID starting with 4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8 not found: ID does not exist" containerID="4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.771322 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8"} err="failed to get container status \"4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8\": rpc error: code = NotFound desc = could not find container \"4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8\": container with ID starting with 4425b37dd5b413d701ecb38986855468483fd38d221440854155af1f07731dd8 not found: ID does not exist" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841173 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-dir\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841247 4931 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-error\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841283 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-serving-cert\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-cliconfig\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841308 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841362 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-service-ca\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841409 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-provider-selection\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841447 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-login\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841472 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-session\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841510 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-idp-0-file-data\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841549 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-ocp-branding-template\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841577 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-policies\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841617 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-router-certs\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841646 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-trusted-ca-bundle\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841688 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxpr\" (UniqueName: \"kubernetes.io/projected/34be6968-eb64-46a3-9e5f-f5568d764d8d-kube-api-access-vtxpr\") pod \"34be6968-eb64-46a3-9e5f-f5568d764d8d\" (UID: \"34be6968-eb64-46a3-9e5f-f5568d764d8d\") " Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.841973 4931 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.842838 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.842863 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.843174 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.843229 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.849165 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.849528 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34be6968-eb64-46a3-9e5f-f5568d764d8d-kube-api-access-vtxpr" (OuterVolumeSpecName: "kube-api-access-vtxpr") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "kube-api-access-vtxpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.849626 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.849864 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.850331 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.853889 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.854128 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.854316 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.854559 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "34be6968-eb64-46a3-9e5f-f5568d764d8d" (UID: "34be6968-eb64-46a3-9e5f-f5568d764d8d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943264 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943303 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943315 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxpr\" (UniqueName: \"kubernetes.io/projected/34be6968-eb64-46a3-9e5f-f5568d764d8d-kube-api-access-vtxpr\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943326 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943335 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943344 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943353 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943363 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943374 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943382 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943392 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943401 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34be6968-eb64-46a3-9e5f-f5568d764d8d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:00 crc kubenswrapper[4931]: I0131 04:28:00.943411 4931 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34be6968-eb64-46a3-9e5f-f5568d764d8d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.067255 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.067554 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.067821 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.068041 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:01 crc 
kubenswrapper[4931]: E0131 04:28:01.669461 4931 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.669966 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:28:01 crc kubenswrapper[4931]: W0131 04:28:01.704512 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2f236d5a6a7f46b23dac5c9243fd9caa245fdb8a647d250c0b54d71360e01949 WatchSource:0}: Error finding container 2f236d5a6a7f46b23dac5c9243fd9caa245fdb8a647d250c0b54d71360e01949: Status 404 returned error can't find the container with id 2f236d5a6a7f46b23dac5c9243fd9caa245fdb8a647d250c0b54d71360e01949 Jan 31 04:28:01 crc kubenswrapper[4931]: E0131 04:28:01.707072 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb65b459fd71b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 04:28:01.706276635 +0000 UTC m=+240.515505519,LastTimestamp:2026-01-31 04:28:01.706276635 +0000 UTC m=+240.515505519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.762545 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2f236d5a6a7f46b23dac5c9243fd9caa245fdb8a647d250c0b54d71360e01949"} Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.899312 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.900484 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.900979 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" 
pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:01 crc kubenswrapper[4931]: I0131 04:28:01.901256 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.250223 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.250892 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.251097 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.251408 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.251619 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: I0131 04:28:02.251643 4931 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.251888 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.452960 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Jan 31 04:28:02 crc kubenswrapper[4931]: I0131 04:28:02.772117 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9b05631cb0f8264564b39d7b05a8216a30f33c0842c1d9573015f29a1e5aab1e"} Jan 31 04:28:02 crc kubenswrapper[4931]: I0131 04:28:02.772708 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: I0131 04:28:02.773004 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.773024 4931 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:28:02 crc kubenswrapper[4931]: I0131 04:28:02.773297 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: I0131 04:28:02.773605 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:02 crc kubenswrapper[4931]: E0131 04:28:02.854097 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Jan 31 04:28:03 crc kubenswrapper[4931]: E0131 04:28:03.655476 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Jan 31 04:28:03 crc kubenswrapper[4931]: E0131 04:28:03.779382 4931 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:28:04 crc kubenswrapper[4931]: E0131 04:28:04.552385 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb65b459fd71b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" 
already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 04:28:01.706276635 +0000 UTC m=+240.515505519,LastTimestamp:2026-01-31 04:28:01.706276635 +0000 UTC m=+240.515505519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 04:28:05 crc kubenswrapper[4931]: E0131 04:28:05.257122 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="3.2s" Jan 31 04:28:08 crc kubenswrapper[4931]: E0131 04:28:08.457993 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="6.4s" Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.822965 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.823345 4931 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488" exitCode=1 Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.823375 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488"} Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.823878 4931 scope.go:117] "RemoveContainer" containerID="68462a94f02c58317ec67e0d085c41fede28494cc877def93eb0724c31b5e488" Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.824240 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.825018 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.825806 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.826245 4931 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:10 crc kubenswrapper[4931]: I0131 04:28:10.826590 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.896355 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.900448 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.900788 4931 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.901267 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.902176 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.902408 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.902772 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.903175 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.903628 4931 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.904020 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.904356 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.912946 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.912974 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:11 crc kubenswrapper[4931]: E0131 04:28:11.913504 4931 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:11 crc kubenswrapper[4931]: I0131 04:28:11.914333 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:11 crc kubenswrapper[4931]: W0131 04:28:11.952676 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ff7290039ded74d1c37a69573ecdc0c8caa31d48a655ae3719470c859d790cf2 WatchSource:0}: Error finding container ff7290039ded74d1c37a69573ecdc0c8caa31d48a655ae3719470c859d790cf2: Status 404 returned error can't find the container with id ff7290039ded74d1c37a69573ecdc0c8caa31d48a655ae3719470c859d790cf2 Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.843290 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.843780 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d9222ee927402b4985b18f11a70173681dc7a9c72b0758fb5c7f941369d2e9a"} Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.845019 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.845378 4931 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.845769 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.846208 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.846684 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.847371 4931 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f44e77cbbb29f8ea39fc16a13ce98c96669ce4ccca4ae5f9530a15ea463e468f" exitCode=0 Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.847412 4931 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f44e77cbbb29f8ea39fc16a13ce98c96669ce4ccca4ae5f9530a15ea463e468f"} Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.847444 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ff7290039ded74d1c37a69573ecdc0c8caa31d48a655ae3719470c859d790cf2"} Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.847713 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.847750 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:12 crc kubenswrapper[4931]: E0131 04:28:12.848072 4931 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.848459 4931 status_manager.go:851] "Failed to get status for pod" podUID="8699b4d0-215a-4648-979d-693eaff910d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.848932 4931 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.849331 4931 status_manager.go:851] "Failed to get status for pod" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" pod="openshift-marketplace/community-operators-nnmk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-nnmk2\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.851254 4931 status_manager.go:851] "Failed to get status for pod" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" pod="openshift-marketplace/certified-operators-62qpd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-62qpd\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:12 crc kubenswrapper[4931]: I0131 04:28:12.851758 4931 status_manager.go:851] "Failed to get status for pod" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" pod="openshift-authentication/oauth-openshift-558db77b4-bmrsg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-bmrsg\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 31 04:28:13 crc kubenswrapper[4931]: I0131 04:28:13.867459 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cac429bacece17d71630804af8e4f238e6f47698eec68c76da01a6fb7c45cb6a"} Jan 31 04:28:13 crc kubenswrapper[4931]: I0131 04:28:13.867500 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83ab14b9bdea931b6451d918063a159614cd61c682fbb445db959336978e0c5d"} Jan 31 04:28:14 crc kubenswrapper[4931]: I0131 04:28:14.890682 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e07b62e267cd64142ad0b4100903a59503e9ddc9828d6c99a5a866a501c4e0ca"} Jan 31 04:28:14 crc kubenswrapper[4931]: I0131 04:28:14.891017 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c5c451123093bda39b78bb0930379ac03537ef4b55956892349b98469a305bd5"} Jan 31 04:28:14 crc kubenswrapper[4931]: I0131 04:28:14.891030 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b400077d5f676aeb23e93a83ea06ed13d4c4db2c3331237714526de327acc9b"} Jan 31 04:28:14 crc kubenswrapper[4931]: I0131 04:28:14.891293 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:14 crc kubenswrapper[4931]: I0131 04:28:14.891308 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:14 crc kubenswrapper[4931]: I0131 04:28:14.891571 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:16 crc kubenswrapper[4931]: I0131 04:28:16.915017 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:16 crc kubenswrapper[4931]: I0131 04:28:16.915358 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:16 crc kubenswrapper[4931]: I0131 04:28:16.920599 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:19 crc kubenswrapper[4931]: I0131 04:28:19.775424 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:28:19 crc kubenswrapper[4931]: I0131 04:28:19.906832 4931 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:20 crc kubenswrapper[4931]: I0131 04:28:20.662672 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:28:20 crc kubenswrapper[4931]: I0131 04:28:20.671272 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:28:20 crc kubenswrapper[4931]: I0131 04:28:20.930949 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:20 crc kubenswrapper[4931]: I0131 04:28:20.931002 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:20 crc kubenswrapper[4931]: I0131 04:28:20.938914 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:21 crc kubenswrapper[4931]: I0131 04:28:21.913654 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a375eac0-5a7e-48d0-8d4c-91af6b4157f4" Jan 31 04:28:21 crc kubenswrapper[4931]: I0131 04:28:21.939315 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:21 crc kubenswrapper[4931]: I0131 04:28:21.939387 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:21 crc kubenswrapper[4931]: I0131 04:28:21.942693 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a375eac0-5a7e-48d0-8d4c-91af6b4157f4" Jan 31 04:28:29 crc kubenswrapper[4931]: I0131 04:28:29.325176 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 04:28:29 crc kubenswrapper[4931]: I0131 04:28:29.369716 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 04:28:29 crc kubenswrapper[4931]: I0131 04:28:29.782277 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:28:30 crc kubenswrapper[4931]: I0131 04:28:30.518761 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 04:28:30 crc kubenswrapper[4931]: I0131 04:28:30.890175 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 04:28:31 crc kubenswrapper[4931]: I0131 04:28:31.005330 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 04:28:31 crc kubenswrapper[4931]: I0131 04:28:31.088652 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 04:28:31 crc kubenswrapper[4931]: I0131 04:28:31.118048 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 04:28:31 crc kubenswrapper[4931]: I0131 04:28:31.432269 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 04:28:31 crc kubenswrapper[4931]: I0131 04:28:31.436850 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 04:28:31 crc kubenswrapper[4931]: I0131 04:28:31.633932 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 04:28:31 crc kubenswrapper[4931]: 
I0131 04:28:31.768948 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 04:28:31 crc kubenswrapper[4931]: I0131 04:28:31.944809 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.150412 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.172117 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.223100 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.248489 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.375114 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.583087 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.717310 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.797516 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.811133 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.869262 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.952050 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 04:28:32 crc kubenswrapper[4931]: I0131 04:28:32.989846 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.013805 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.184065 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.442235 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.508153 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.592307 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 
04:28:33.636211 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.638070 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.733570 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.766611 4931 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.775702 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bmrsg","openshift-marketplace/certified-operators-62qpd","openshift-marketplace/community-operators-nnmk2","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.775903 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84"] Jan 31 04:28:33 crc kubenswrapper[4931]: E0131 04:28:33.776292 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8699b4d0-215a-4648-979d-693eaff910d7" containerName="installer" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.776326 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="8699b4d0-215a-4648-979d-693eaff910d7" containerName="installer" Jan 31 04:28:33 crc kubenswrapper[4931]: E0131 04:28:33.776345 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" containerName="oauth-openshift" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.776360 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" containerName="oauth-openshift" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.776419 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.776547 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="638aa0b1-4b39-4fe0-b136-0a9b2a53edde" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.776569 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="8699b4d0-215a-4648-979d-693eaff910d7" containerName="installer" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.776598 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" containerName="oauth-openshift" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.777414 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.781132 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.781926 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.782228 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.782535 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.782782 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.782800 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.782916 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.783267 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.783340 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.783449 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.783283 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.785097 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.785395 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.787838 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.788193 4931 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.789703 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.789819 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.789881 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790014 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxtv\" (UniqueName: \"kubernetes.io/projected/77bf60f1-82f4-411d-80f7-e53834ddf315-kube-api-access-5bxtv\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790093 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790123 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790162 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790210 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790240 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 
04:28:33.790286 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790330 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790364 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790900 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77bf60f1-82f4-411d-80f7-e53834ddf315-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.790972 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.818011 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.824114 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.824072189 podStartE2EDuration="14.824072189s" podCreationTimestamp="2026-01-31 04:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:28:33.811216363 +0000 UTC m=+272.620445277" watchObservedRunningTime="2026-01-31 04:28:33.824072189 +0000 UTC m=+272.633301113" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.825111 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.825921 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.831143 4931 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.842935 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.844326 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892362 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892411 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892434 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bxtv\" (UniqueName: \"kubernetes.io/projected/77bf60f1-82f4-411d-80f7-e53834ddf315-kube-api-access-5bxtv\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892499 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892514 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892535 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " 
pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892560 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892577 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892603 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892629 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892648 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892666 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77bf60f1-82f4-411d-80f7-e53834ddf315-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.892689 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.894953 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " 
pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.895227 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.895342 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.895368 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77bf60f1-82f4-411d-80f7-e53834ddf315-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.895491 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.901314 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.901358 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.901586 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.904250 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34be6968-eb64-46a3-9e5f-f5568d764d8d" path="/var/lib/kubelet/pods/34be6968-eb64-46a3-9e5f-f5568d764d8d/volumes" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.904765 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.904916 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.905158 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73bfe314-a35b-4a93-b1cb-f1f7a4b2756c" path="/var/lib/kubelet/pods/73bfe314-a35b-4a93-b1cb-f1f7a4b2756c/volumes" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.905475 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.905840 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909dcbcc-c48a-4180-8b14-524e5839eaef" path="/var/lib/kubelet/pods/909dcbcc-c48a-4180-8b14-524e5839eaef/volumes" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.907026 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.909754 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77bf60f1-82f4-411d-80f7-e53834ddf315-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.917577 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bxtv\" (UniqueName: \"kubernetes.io/projected/77bf60f1-82f4-411d-80f7-e53834ddf315-kube-api-access-5bxtv\") pod \"oauth-openshift-7f5b9fd94b-ltt84\" (UID: \"77bf60f1-82f4-411d-80f7-e53834ddf315\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.953623 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.975573 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 04:28:33 crc kubenswrapper[4931]: I0131 04:28:33.985270 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.067574 4931 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.069996 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.113940 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.139669 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.233684 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.254909 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.362144 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.409080 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.448905 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.450687 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.539958 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.608336 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.654926 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.742967 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.773666 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 04:28:34 crc kubenswrapper[4931]: I0131 04:28:34.803140 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.076410 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.110701 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.115249 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 
31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.123052 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.143516 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.217388 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.234510 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.287688 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.332611 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.483649 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.518433 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.521597 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.629653 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.746074 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.896168 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 04:28:35 crc kubenswrapper[4931]: I0131 04:28:35.915957 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.003199 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.020768 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.056073 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.056838 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.099281 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.139896 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 04:28:36 crc 
kubenswrapper[4931]: I0131 04:28:36.288309 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.453040 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.479950 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.507786 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.573861 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.648319 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.870901 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.896808 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.899588 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 04:28:36 crc kubenswrapper[4931]: I0131 04:28:36.914408 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.143363 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.149969 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.334357 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 04:28:37 crc kubenswrapper[4931]: E0131 04:28:37.376125 4931 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 31 04:28:37 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7f5b9fd94b-ltt84_openshift-authentication_77bf60f1-82f4-411d-80f7-e53834ddf315_0(1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e): error adding pod openshift-authentication_oauth-openshift-7f5b9fd94b-ltt84 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e" Netns:"/var/run/netns/ed110bad-2bfd-4632-b5fd-56e8ed7c2fea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7f5b9fd94b-ltt84;K8S_POD_INFRA_CONTAINER_ID=1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e;K8S_POD_UID=77bf60f1-82f4-411d-80f7-e53834ddf315" Path:"" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84] networking: Multus: [openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84/77bf60f1-82f4-411d-80f7-e53834ddf315]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7f5b9fd94b-ltt84 in out of cluster comm: pod "oauth-openshift-7f5b9fd94b-ltt84" not found Jan 31 04:28:37 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 04:28:37 crc kubenswrapper[4931]: > Jan 31 04:28:37 crc kubenswrapper[4931]: E0131 04:28:37.376214 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 31 04:28:37 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7f5b9fd94b-ltt84_openshift-authentication_77bf60f1-82f4-411d-80f7-e53834ddf315_0(1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e): error adding pod openshift-authentication_oauth-openshift-7f5b9fd94b-ltt84 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e" Netns:"/var/run/netns/ed110bad-2bfd-4632-b5fd-56e8ed7c2fea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7f5b9fd94b-ltt84;K8S_POD_INFRA_CONTAINER_ID=1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e;K8S_POD_UID=77bf60f1-82f4-411d-80f7-e53834ddf315" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84] networking: Multus: [openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84/77bf60f1-82f4-411d-80f7-e53834ddf315]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7f5b9fd94b-ltt84 in out of cluster comm: pod "oauth-openshift-7f5b9fd94b-ltt84" not found Jan 31 04:28:37 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 04:28:37 crc kubenswrapper[4931]: > pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:37 crc kubenswrapper[4931]: E0131 04:28:37.376244 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 31 04:28:37 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7f5b9fd94b-ltt84_openshift-authentication_77bf60f1-82f4-411d-80f7-e53834ddf315_0(1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e): error adding pod openshift-authentication_oauth-openshift-7f5b9fd94b-ltt84 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e" Netns:"/var/run/netns/ed110bad-2bfd-4632-b5fd-56e8ed7c2fea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7f5b9fd94b-ltt84;K8S_POD_INFRA_CONTAINER_ID=1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e;K8S_POD_UID=77bf60f1-82f4-411d-80f7-e53834ddf315" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84] networking: Multus: [openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84/77bf60f1-82f4-411d-80f7-e53834ddf315]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7f5b9fd94b-ltt84 in out of cluster comm: pod "oauth-openshift-7f5b9fd94b-ltt84" not found Jan 31 04:28:37 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 04:28:37 crc kubenswrapper[4931]: > pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:37 crc kubenswrapper[4931]: E0131 04:28:37.376310 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-7f5b9fd94b-ltt84_openshift-authentication(77bf60f1-82f4-411d-80f7-e53834ddf315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-7f5b9fd94b-ltt84_openshift-authentication(77bf60f1-82f4-411d-80f7-e53834ddf315)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7f5b9fd94b-ltt84_openshift-authentication_77bf60f1-82f4-411d-80f7-e53834ddf315_0(1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e): error adding pod openshift-authentication_oauth-openshift-7f5b9fd94b-ltt84 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e\\\" Netns:\\\"/var/run/netns/ed110bad-2bfd-4632-b5fd-56e8ed7c2fea\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7f5b9fd94b-ltt84;K8S_POD_INFRA_CONTAINER_ID=1617151dad4c7aa1722ec817b3df9b325615bde8d41c7f0095fa1b43c703b42e;K8S_POD_UID=77bf60f1-82f4-411d-80f7-e53834ddf315\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84] networking: Multus: [openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84/77bf60f1-82f4-411d-80f7-e53834ddf315]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7f5b9fd94b-ltt84 in out of cluster comm: pod \\\"oauth-openshift-7f5b9fd94b-ltt84\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" podUID="77bf60f1-82f4-411d-80f7-e53834ddf315" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.601532 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.690595 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.748382 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.825914 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.901661 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.969889 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 04:28:37 crc kubenswrapper[4931]: I0131 04:28:37.978608 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.036314 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.094607 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.121676 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.130967 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.194705 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.214293 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.255616 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.268996 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.292621 4931 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.541282 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.620382 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.810077 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.873137 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.924265 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.990110 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 04:28:38 crc kubenswrapper[4931]: I0131 04:28:38.999229 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.017354 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.112639 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.140647 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.227945 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.247481 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.250467 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.265349 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.343900 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.401412 4931 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.499537 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.516570 4931 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.533640 4931 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.566408 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.625537 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.729516 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.851011 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.898540 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 04:28:39 crc kubenswrapper[4931]: I0131 04:28:39.942421 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.061846 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.138249 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.141213 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.181115 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.208255 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.258826 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.308619 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.342164 4931 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.376524 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.473756 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.548667 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.552222 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.651868 4931 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.675554 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.686989 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.780462 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.788916 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.810900 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 04:28:40 crc kubenswrapper[4931]: I0131 04:28:40.886969 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.069506 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.096244 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.106897 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.228068 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.344802 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.359688 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.364120 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.392687 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.461542 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.475170 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.501496 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.522544 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.529808 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 04:28:41 crc 
kubenswrapper[4931]: I0131 04:28:41.662613 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.698515 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.737493 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.775417 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 04:28:41 crc kubenswrapper[4931]: I0131 04:28:41.854187 4931 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.018553 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.036215 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.086518 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.090852 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.094027 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.108589 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.128152 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.139012 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.345136 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.387836 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.408252 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.428036 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.618536 4931 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.618830 4931 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9b05631cb0f8264564b39d7b05a8216a30f33c0842c1d9573015f29a1e5aab1e" gracePeriod=5 Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.647678 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.668383 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 04:28:42 crc kubenswrapper[4931]: I0131 04:28:42.980600 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.181597 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.196939 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.255297 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.415963 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.441256 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.441577 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.490977 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.502516 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.692284 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.881123 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.881746 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.890495 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 04:28:43 crc kubenswrapper[4931]: I0131 04:28:43.990461 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.009145 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.025531 4931 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.112532 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.291030 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.294106 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.305254 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.337783 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.560419 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.590841 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.721149 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.731084 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.831244 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 04:28:44 crc kubenswrapper[4931]: I0131 04:28:44.872184 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.069469 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.120435 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.302805 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.317752 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.427223 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.570180 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.600138 4931 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 04:28:45 crc kubenswrapper[4931]: I0131 04:28:45.844964 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 04:28:47 crc kubenswrapper[4931]: I0131 04:28:47.896800 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:47 crc kubenswrapper[4931]: I0131 04:28:47.897834 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.140597 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.140672 4931 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9b05631cb0f8264564b39d7b05a8216a30f33c0842c1d9573015f29a1e5aab1e" exitCode=137 Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.230326 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.230911 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.317275 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.317382 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.317467 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.317524 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.317576 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.318048 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod 
"f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.318106 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.318088 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.318199 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.336998 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.367582 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84"] Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.419770 4931 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.419832 4931 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.419853 4931 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.419874 4931 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:48 crc kubenswrapper[4931]: I0131 04:28:48.419891 4931 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.150490 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" event={"ID":"77bf60f1-82f4-411d-80f7-e53834ddf315","Type":"ContainerStarted","Data":"76b250a86569aab04bbc91d626815738741950ee7658cdddee570eaec96e5d7e"} Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.151031 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" event={"ID":"77bf60f1-82f4-411d-80f7-e53834ddf315","Type":"ContainerStarted","Data":"e8eddfa8953d78f9ec8881d9fd959ac7ce2af8675bf987e8761ee3d195ec6cb5"} Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.151053 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.153810 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.153924 4931 scope.go:117] "RemoveContainer" containerID="9b05631cb0f8264564b39d7b05a8216a30f33c0842c1d9573015f29a1e5aab1e" Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.154111 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.202058 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" podStartSLOduration=74.202024306 podStartE2EDuration="1m14.202024306s" podCreationTimestamp="2026-01-31 04:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:28:49.190818164 +0000 UTC m=+288.000047088" watchObservedRunningTime="2026-01-31 04:28:49.202024306 +0000 UTC m=+288.011253220" Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.248202 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-ltt84" Jan 31 04:28:49 crc kubenswrapper[4931]: I0131 04:28:49.910295 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 04:28:55 crc kubenswrapper[4931]: I0131 04:28:55.841673 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 04:28:57 crc kubenswrapper[4931]: I0131 04:28:57.321019 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 04:28:58 crc kubenswrapper[4931]: I0131 04:28:58.391539 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 04:28:59 crc kubenswrapper[4931]: I0131 04:28:59.355811 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 04:29:00 crc kubenswrapper[4931]: I0131 04:29:00.121670 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 04:29:00 crc kubenswrapper[4931]: I0131 04:29:00.947893 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 04:29:01 crc kubenswrapper[4931]: I0131 04:29:01.233709 4931 generic.go:334] "Generic (PLEG): container finished" podID="b94cc359-b91d-4058-9b99-daee5cb58497" containerID="2214ded12636a10761e8ebbc7d9272c977396c766915fdceeaefef57bd936a80" exitCode=0 Jan 31 04:29:01 crc kubenswrapper[4931]: I0131 04:29:01.233764 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" event={"ID":"b94cc359-b91d-4058-9b99-daee5cb58497","Type":"ContainerDied","Data":"2214ded12636a10761e8ebbc7d9272c977396c766915fdceeaefef57bd936a80"} Jan 31 04:29:01 crc kubenswrapper[4931]: I0131 04:29:01.234543 4931 scope.go:117] "RemoveContainer" containerID="2214ded12636a10761e8ebbc7d9272c977396c766915fdceeaefef57bd936a80" Jan 31 04:29:01 crc kubenswrapper[4931]: I0131 04:29:01.684400 4931 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 04:29:02 crc kubenswrapper[4931]: I0131 04:29:02.241995 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" event={"ID":"b94cc359-b91d-4058-9b99-daee5cb58497","Type":"ContainerStarted","Data":"4c66077881cec64331ed6c4fa597771bef3329e8c8eb6639da57807e9f0b3b4c"} Jan 31 04:29:02 crc 
kubenswrapper[4931]: I0131 04:29:02.243129 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:29:02 crc kubenswrapper[4931]: I0131 04:29:02.248418 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:29:03 crc kubenswrapper[4931]: I0131 04:29:03.318411 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 04:29:05 crc kubenswrapper[4931]: I0131 04:29:05.979786 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 04:29:06 crc kubenswrapper[4931]: I0131 04:29:06.056146 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 04:29:08 crc kubenswrapper[4931]: I0131 04:29:08.938336 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-84x2t"] Jan 31 04:29:08 crc kubenswrapper[4931]: I0131 04:29:08.938606 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" podUID="70ea039d-762a-4d09-a1a3-45d32be0e754" containerName="controller-manager" containerID="cri-o://a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f" gracePeriod=30 Jan 31 04:29:08 crc kubenswrapper[4931]: I0131 04:29:08.975076 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.034050 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv"] Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.034293 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" podUID="516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" containerName="route-controller-manager" containerID="cri-o://4d67bd9342c84fb1ad92565804f20464bd57f2ca9134459d1dd179e2d44d5fed" gracePeriod=30 Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.273422 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.299525 4931 generic.go:334] "Generic (PLEG): container finished" podID="70ea039d-762a-4d09-a1a3-45d32be0e754" containerID="a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f" exitCode=0 Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.299587 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" event={"ID":"70ea039d-762a-4d09-a1a3-45d32be0e754","Type":"ContainerDied","Data":"a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f"} Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.299620 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" event={"ID":"70ea039d-762a-4d09-a1a3-45d32be0e754","Type":"ContainerDied","Data":"5939c6696985b2711bf2f7863239bc74efa5aa10d79d84da6424d41ed683d537"} Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.299642 4931 scope.go:117] "RemoveContainer" containerID="a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.299695 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-84x2t" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.301709 4931 generic.go:334] "Generic (PLEG): container finished" podID="516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" containerID="4d67bd9342c84fb1ad92565804f20464bd57f2ca9134459d1dd179e2d44d5fed" exitCode=0 Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.301752 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" event={"ID":"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd","Type":"ContainerDied","Data":"4d67bd9342c84fb1ad92565804f20464bd57f2ca9134459d1dd179e2d44d5fed"} Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.315703 4931 scope.go:117] "RemoveContainer" containerID="a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f" Jan 31 04:29:09 crc kubenswrapper[4931]: E0131 04:29:09.316258 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f\": container with ID starting with a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f not found: ID does not exist" containerID="a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.316304 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f"} err="failed to get container status \"a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f\": rpc error: code = NotFound desc = could not find container \"a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f\": container with ID starting with a3cb5631fa92d8841dd37c10d90d4aea4419e49937c7060b1cddcd49cdb61d6f not found: ID does not exist" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.363511 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.453471 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ea039d-762a-4d09-a1a3-45d32be0e754-serving-cert\") pod \"70ea039d-762a-4d09-a1a3-45d32be0e754\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.453512 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2r6r\" (UniqueName: \"kubernetes.io/projected/70ea039d-762a-4d09-a1a3-45d32be0e754-kube-api-access-b2r6r\") pod \"70ea039d-762a-4d09-a1a3-45d32be0e754\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.453560 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-proxy-ca-bundles\") pod \"70ea039d-762a-4d09-a1a3-45d32be0e754\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.453579 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-config\") pod \"70ea039d-762a-4d09-a1a3-45d32be0e754\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.453642 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-client-ca\") pod \"70ea039d-762a-4d09-a1a3-45d32be0e754\" (UID: \"70ea039d-762a-4d09-a1a3-45d32be0e754\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.454443 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-client-ca" (OuterVolumeSpecName: "client-ca") pod "70ea039d-762a-4d09-a1a3-45d32be0e754" (UID: "70ea039d-762a-4d09-a1a3-45d32be0e754"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.454453 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70ea039d-762a-4d09-a1a3-45d32be0e754" (UID: "70ea039d-762a-4d09-a1a3-45d32be0e754"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.454609 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-config" (OuterVolumeSpecName: "config") pod "70ea039d-762a-4d09-a1a3-45d32be0e754" (UID: "70ea039d-762a-4d09-a1a3-45d32be0e754"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.458833 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ea039d-762a-4d09-a1a3-45d32be0e754-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70ea039d-762a-4d09-a1a3-45d32be0e754" (UID: "70ea039d-762a-4d09-a1a3-45d32be0e754"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.459063 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ea039d-762a-4d09-a1a3-45d32be0e754-kube-api-access-b2r6r" (OuterVolumeSpecName: "kube-api-access-b2r6r") pod "70ea039d-762a-4d09-a1a3-45d32be0e754" (UID: "70ea039d-762a-4d09-a1a3-45d32be0e754"). InnerVolumeSpecName "kube-api-access-b2r6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.555022 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-client-ca\") pod \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.555124 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-config\") pod \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.555170 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs42v\" (UniqueName: \"kubernetes.io/projected/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-kube-api-access-vs42v\") pod \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.555206 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-serving-cert\") pod \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\" (UID: \"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd\") " Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.556284 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ea039d-762a-4d09-a1a3-45d32be0e754-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.556306 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2r6r\" (UniqueName: \"kubernetes.io/projected/70ea039d-762a-4d09-a1a3-45d32be0e754-kube-api-access-b2r6r\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.556316 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.556325 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.556333 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70ea039d-762a-4d09-a1a3-45d32be0e754-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.556712 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-client-ca" (OuterVolumeSpecName: "client-ca") pod "516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" (UID: 
"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.556789 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-config" (OuterVolumeSpecName: "config") pod "516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" (UID: "516aa61e-41f3-4866-ad1c-f4d87fbb9ebd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.561030 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-kube-api-access-vs42v" (OuterVolumeSpecName: "kube-api-access-vs42v") pod "516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" (UID: "516aa61e-41f3-4866-ad1c-f4d87fbb9ebd"). InnerVolumeSpecName "kube-api-access-vs42v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.561571 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" (UID: "516aa61e-41f3-4866-ad1c-f4d87fbb9ebd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.631608 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-84x2t"] Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.635011 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-84x2t"] Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.657905 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.657936 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs42v\" (UniqueName: \"kubernetes.io/projected/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-kube-api-access-vs42v\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.657946 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.657956 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.875280 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 04:29:09 crc kubenswrapper[4931]: I0131 04:29:09.903303 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ea039d-762a-4d09-a1a3-45d32be0e754" path="/var/lib/kubelet/pods/70ea039d-762a-4d09-a1a3-45d32be0e754/volumes" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.217806 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.227525 4931 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x"] Jan 31 04:29:10 crc kubenswrapper[4931]: E0131 04:29:10.227821 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" containerName="route-controller-manager" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.227844 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" containerName="route-controller-manager" Jan 31 04:29:10 crc kubenswrapper[4931]: E0131 04:29:10.227857 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea039d-762a-4d09-a1a3-45d32be0e754" containerName="controller-manager" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.227865 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea039d-762a-4d09-a1a3-45d32be0e754" containerName="controller-manager" Jan 31 04:29:10 crc kubenswrapper[4931]: E0131 04:29:10.227889 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.227897 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.228009 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" containerName="route-controller-manager" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.228024 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ea039d-762a-4d09-a1a3-45d32be0e754" containerName="controller-manager" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.228039 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.228521 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.231989 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w"] Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.236827 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.239576 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.240532 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w"] Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.240458 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.240526 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.241981 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.242255 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.243968 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x"] Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.246634 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.250544 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.308390 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" event={"ID":"516aa61e-41f3-4866-ad1c-f4d87fbb9ebd","Type":"ContainerDied","Data":"3fc6ebf27167c3df9612877427a2c1879fc8aa95bab56dd402c3d1bbf423893b"} Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.308463 4931 scope.go:117] "RemoveContainer" containerID="4d67bd9342c84fb1ad92565804f20464bd57f2ca9134459d1dd179e2d44d5fed" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.308403 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.330275 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv"] Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.335626 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hv6zv"] Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.366858 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-config\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367007 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-proxy-ca-bundles\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367097 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-client-ca\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367172 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2px\" (UniqueName: \"kubernetes.io/projected/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-kube-api-access-wn2px\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367203 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-config\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367232 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97302c5-e937-43bc-95c7-ce2652e62136-serving-cert\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367253 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-serving-cert\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " 
pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367413 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfnx\" (UniqueName: \"kubernetes.io/projected/b97302c5-e937-43bc-95c7-ce2652e62136-kube-api-access-dwfnx\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.367444 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-client-ca\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468294 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-client-ca\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468379 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-config\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468413 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-proxy-ca-bundles\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468431 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-client-ca\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468453 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2px\" (UniqueName: \"kubernetes.io/projected/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-kube-api-access-wn2px\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468472 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-config\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc 
kubenswrapper[4931]: I0131 04:29:10.468492 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97302c5-e937-43bc-95c7-ce2652e62136-serving-cert\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468506 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-serving-cert\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.468525 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwfnx\" (UniqueName: \"kubernetes.io/projected/b97302c5-e937-43bc-95c7-ce2652e62136-kube-api-access-dwfnx\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.469539 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-client-ca\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.470018 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-config\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.470921 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-proxy-ca-bundles\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.471220 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-client-ca\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.473020 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-config\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.474943 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b97302c5-e937-43bc-95c7-ce2652e62136-serving-cert\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.475381 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-serving-cert\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.484186 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwfnx\" (UniqueName: \"kubernetes.io/projected/b97302c5-e937-43bc-95c7-ce2652e62136-kube-api-access-dwfnx\") pod \"route-controller-manager-54c6d46ddc-2mn9x\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.484801 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2px\" (UniqueName: \"kubernetes.io/projected/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-kube-api-access-wn2px\") pod \"controller-manager-6d58c8dfd9-d7d5w\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.516550 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.557405 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.568783 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.742457 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x"] Jan 31 04:29:10 crc kubenswrapper[4931]: I0131 04:29:10.792308 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w"] Jan 31 04:29:10 crc kubenswrapper[4931]: W0131 04:29:10.796786 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd994d5fe_0a87_4ab5_a1e2_617ac213fa01.slice/crio-ebc7a36ec57f0328d1d4da1bb16ca7777810676192c3476029e3341a78565cef WatchSource:0}: Error finding container ebc7a36ec57f0328d1d4da1bb16ca7777810676192c3476029e3341a78565cef: Status 404 returned error can't find the container with id ebc7a36ec57f0328d1d4da1bb16ca7777810676192c3476029e3341a78565cef Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.317505 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" event={"ID":"b97302c5-e937-43bc-95c7-ce2652e62136","Type":"ContainerStarted","Data":"e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451"} Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.317843 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" event={"ID":"b97302c5-e937-43bc-95c7-ce2652e62136","Type":"ContainerStarted","Data":"f7b01545b873232d60ce2fa31bcc81529500949e9f6482acbcf59ca7dcf02fbf"} Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.317868 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.319233 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" event={"ID":"d994d5fe-0a87-4ab5-a1e2-617ac213fa01","Type":"ContainerStarted","Data":"b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7"} Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.319279 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" event={"ID":"d994d5fe-0a87-4ab5-a1e2-617ac213fa01","Type":"ContainerStarted","Data":"ebc7a36ec57f0328d1d4da1bb16ca7777810676192c3476029e3341a78565cef"} Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.319837 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.324112 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.334401 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" podStartSLOduration=2.334358839 podStartE2EDuration="2.334358839s" podCreationTimestamp="2026-01-31 04:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:29:11.332082375 +0000 UTC 
m=+310.141311249" watchObservedRunningTime="2026-01-31 04:29:11.334358839 +0000 UTC m=+310.143587713" Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.352657 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" podStartSLOduration=3.35263463 podStartE2EDuration="3.35263463s" podCreationTimestamp="2026-01-31 04:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:29:11.347618948 +0000 UTC m=+310.156847842" watchObservedRunningTime="2026-01-31 04:29:11.35263463 +0000 UTC m=+310.161863504" Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.492569 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:11 crc kubenswrapper[4931]: I0131 04:29:11.905599 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516aa61e-41f3-4866-ad1c-f4d87fbb9ebd" path="/var/lib/kubelet/pods/516aa61e-41f3-4866-ad1c-f4d87fbb9ebd/volumes" Jan 31 04:29:12 crc kubenswrapper[4931]: I0131 04:29:12.328948 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 04:29:12 crc kubenswrapper[4931]: I0131 04:29:12.475678 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w"] Jan 31 04:29:12 crc kubenswrapper[4931]: I0131 04:29:12.504401 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x"] Jan 31 04:29:13 crc kubenswrapper[4931]: I0131 04:29:13.201101 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.335786 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" podUID="d994d5fe-0a87-4ab5-a1e2-617ac213fa01" containerName="controller-manager" containerID="cri-o://b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7" gracePeriod=30 Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.338412 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" podUID="b97302c5-e937-43bc-95c7-ce2652e62136" containerName="route-controller-manager" containerID="cri-o://e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451" gracePeriod=30 Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.811894 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.817203 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.892187 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-9h798"] Jan 31 04:29:14 crc kubenswrapper[4931]: E0131 04:29:14.892451 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97302c5-e937-43bc-95c7-ce2652e62136" containerName="route-controller-manager" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.892476 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97302c5-e937-43bc-95c7-ce2652e62136" containerName="route-controller-manager" Jan 31 04:29:14 crc kubenswrapper[4931]: E0131 04:29:14.892505 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d994d5fe-0a87-4ab5-a1e2-617ac213fa01" containerName="controller-manager" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.892514 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d994d5fe-0a87-4ab5-a1e2-617ac213fa01" containerName="controller-manager" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.892647 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d994d5fe-0a87-4ab5-a1e2-617ac213fa01" containerName="controller-manager" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.892666 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97302c5-e937-43bc-95c7-ce2652e62136" containerName="route-controller-manager" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.893120 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.908737 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-9h798"] Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929459 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97302c5-e937-43bc-95c7-ce2652e62136-serving-cert\") pod \"b97302c5-e937-43bc-95c7-ce2652e62136\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929508 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn2px\" (UniqueName: \"kubernetes.io/projected/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-kube-api-access-wn2px\") pod \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929552 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwfnx\" (UniqueName: \"kubernetes.io/projected/b97302c5-e937-43bc-95c7-ce2652e62136-kube-api-access-dwfnx\") pod \"b97302c5-e937-43bc-95c7-ce2652e62136\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929583 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-proxy-ca-bundles\") pod \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929600 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-client-ca\") pod \"b97302c5-e937-43bc-95c7-ce2652e62136\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929640 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-serving-cert\") pod \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929660 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-config\") pod \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929679 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-client-ca\") pod \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\" (UID: \"d994d5fe-0a87-4ab5-a1e2-617ac213fa01\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.929707 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-config\") pod \"b97302c5-e937-43bc-95c7-ce2652e62136\" (UID: \"b97302c5-e937-43bc-95c7-ce2652e62136\") " Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.930893 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-client-ca" (OuterVolumeSpecName: "client-ca") pod "d994d5fe-0a87-4ab5-a1e2-617ac213fa01" (UID: "d994d5fe-0a87-4ab5-a1e2-617ac213fa01"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.931024 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-config" (OuterVolumeSpecName: "config") pod "d994d5fe-0a87-4ab5-a1e2-617ac213fa01" (UID: "d994d5fe-0a87-4ab5-a1e2-617ac213fa01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.931030 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-client-ca" (OuterVolumeSpecName: "client-ca") pod "b97302c5-e937-43bc-95c7-ce2652e62136" (UID: "b97302c5-e937-43bc-95c7-ce2652e62136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.931117 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-config" (OuterVolumeSpecName: "config") pod "b97302c5-e937-43bc-95c7-ce2652e62136" (UID: "b97302c5-e937-43bc-95c7-ce2652e62136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.931130 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d994d5fe-0a87-4ab5-a1e2-617ac213fa01" (UID: "d994d5fe-0a87-4ab5-a1e2-617ac213fa01"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.937173 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-kube-api-access-wn2px" (OuterVolumeSpecName: "kube-api-access-wn2px") pod "d994d5fe-0a87-4ab5-a1e2-617ac213fa01" (UID: "d994d5fe-0a87-4ab5-a1e2-617ac213fa01"). InnerVolumeSpecName "kube-api-access-wn2px". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.937217 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97302c5-e937-43bc-95c7-ce2652e62136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b97302c5-e937-43bc-95c7-ce2652e62136" (UID: "b97302c5-e937-43bc-95c7-ce2652e62136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.937224 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d994d5fe-0a87-4ab5-a1e2-617ac213fa01" (UID: "d994d5fe-0a87-4ab5-a1e2-617ac213fa01"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:29:14 crc kubenswrapper[4931]: I0131 04:29:14.937643 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97302c5-e937-43bc-95c7-ce2652e62136-kube-api-access-dwfnx" (OuterVolumeSpecName: "kube-api-access-dwfnx") pod "b97302c5-e937-43bc-95c7-ce2652e62136" (UID: "b97302c5-e937-43bc-95c7-ce2652e62136"). InnerVolumeSpecName "kube-api-access-dwfnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.031891 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcwjg\" (UniqueName: \"kubernetes.io/projected/df41a5f8-4bc3-41a5-be02-96c471389e9e-kube-api-access-wcwjg\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.031966 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-client-ca\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.031988 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df41a5f8-4bc3-41a5-be02-96c471389e9e-serving-cert\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032028 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-config\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032116 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032131 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032142 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032154 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032165 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97302c5-e937-43bc-95c7-ce2652e62136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032178 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn2px\" (UniqueName: \"kubernetes.io/projected/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-kube-api-access-wn2px\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032194 4931 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-dwfnx\" (UniqueName: \"kubernetes.io/projected/b97302c5-e937-43bc-95c7-ce2652e62136-kube-api-access-dwfnx\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032207 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d994d5fe-0a87-4ab5-a1e2-617ac213fa01-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.032219 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97302c5-e937-43bc-95c7-ce2652e62136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.132931 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcwjg\" (UniqueName: \"kubernetes.io/projected/df41a5f8-4bc3-41a5-be02-96c471389e9e-kube-api-access-wcwjg\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.132991 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-client-ca\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.133023 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df41a5f8-4bc3-41a5-be02-96c471389e9e-serving-cert\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.133051 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-config\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.134400 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-config\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.134413 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-client-ca\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.138255 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df41a5f8-4bc3-41a5-be02-96c471389e9e-serving-cert\") pod 
\"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.149334 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcwjg\" (UniqueName: \"kubernetes.io/projected/df41a5f8-4bc3-41a5-be02-96c471389e9e-kube-api-access-wcwjg\") pod \"route-controller-manager-85776c4794-9h798\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.210048 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.342423 4931 generic.go:334] "Generic (PLEG): container finished" podID="d994d5fe-0a87-4ab5-a1e2-617ac213fa01" containerID="b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7" exitCode=0 Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.342593 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" event={"ID":"d994d5fe-0a87-4ab5-a1e2-617ac213fa01","Type":"ContainerDied","Data":"b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7"} Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.342897 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" event={"ID":"d994d5fe-0a87-4ab5-a1e2-617ac213fa01","Type":"ContainerDied","Data":"ebc7a36ec57f0328d1d4da1bb16ca7777810676192c3476029e3341a78565cef"} Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.342700 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.342938 4931 scope.go:117] "RemoveContainer" containerID="b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.345929 4931 generic.go:334] "Generic (PLEG): container finished" podID="b97302c5-e937-43bc-95c7-ce2652e62136" containerID="e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451" exitCode=0 Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.345998 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.346002 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" event={"ID":"b97302c5-e937-43bc-95c7-ce2652e62136","Type":"ContainerDied","Data":"e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451"} Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.346326 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x" event={"ID":"b97302c5-e937-43bc-95c7-ce2652e62136","Type":"ContainerDied","Data":"f7b01545b873232d60ce2fa31bcc81529500949e9f6482acbcf59ca7dcf02fbf"} Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.378341 4931 scope.go:117] "RemoveContainer" containerID="b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7" Jan 31 04:29:15 crc kubenswrapper[4931]: E0131 04:29:15.378812 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7\": container with ID starting with b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7 not found: ID does not exist" containerID="b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.378845 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7"} err="failed to get container status \"b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7\": rpc error: code = NotFound desc = could not find container \"b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7\": container with ID starting with b857871e60a0299adaf6237b69e6e9e8fc862a139e62b2ff5b7b419f22fa87c7 not found: ID does not exist" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.378874 4931 scope.go:117] "RemoveContainer" containerID="e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.390577 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w"] Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.393885 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d58c8dfd9-d7d5w"] Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.400054 4931 scope.go:117] "RemoveContainer" containerID="e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451" Jan 31 04:29:15 crc kubenswrapper[4931]: E0131 04:29:15.404149 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451\": container with ID starting with e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451 not found: ID does not exist" containerID="e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.404209 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451"} err="failed to get container status 
\"e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451\": rpc error: code = NotFound desc = could not find container \"e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451\": container with ID starting with e3d01c9a30f7492b694c9d15dcd58b4c8ffeb987493d7e04689dcd1896dfc451 not found: ID does not exist" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.410802 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x"] Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.410847 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c6d46ddc-2mn9x"] Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.437169 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-9h798"] Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.903564 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97302c5-e937-43bc-95c7-ce2652e62136" path="/var/lib/kubelet/pods/b97302c5-e937-43bc-95c7-ce2652e62136/volumes" Jan 31 04:29:15 crc kubenswrapper[4931]: I0131 04:29:15.904565 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d994d5fe-0a87-4ab5-a1e2-617ac213fa01" path="/var/lib/kubelet/pods/d994d5fe-0a87-4ab5-a1e2-617ac213fa01/volumes" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.284648 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brnp5"] Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.284993 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brnp5" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="registry-server" containerID="cri-o://83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7" gracePeriod=2 Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.352125 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" event={"ID":"df41a5f8-4bc3-41a5-be02-96c471389e9e","Type":"ContainerStarted","Data":"f3fd4944906356146467814b884b095c35b3481e0d7345bc2e792dec141c8eb8"} Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.352174 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" event={"ID":"df41a5f8-4bc3-41a5-be02-96c471389e9e","Type":"ContainerStarted","Data":"8fe4efee86f931aa1a767a946540ef5ed8e8c55f616ecb49b489ff6f05b42a0d"} Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.352341 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.357957 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.373503 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" podStartSLOduration=4.373480885 podStartE2EDuration="4.373480885s" podCreationTimestamp="2026-01-31 04:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 04:29:16.371402277 +0000 UTC m=+315.180631151" watchObservedRunningTime="2026-01-31 04:29:16.373480885 +0000 UTC m=+315.182709759" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.657643 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.856108 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-catalog-content\") pod \"c3f1936c-4896-4468-b5c3-958691b633b7\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.856190 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-utilities\") pod \"c3f1936c-4896-4468-b5c3-958691b633b7\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.856250 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp96j\" (UniqueName: \"kubernetes.io/projected/c3f1936c-4896-4468-b5c3-958691b633b7-kube-api-access-jp96j\") pod \"c3f1936c-4896-4468-b5c3-958691b633b7\" (UID: \"c3f1936c-4896-4468-b5c3-958691b633b7\") " Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.857480 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-utilities" (OuterVolumeSpecName: "utilities") pod "c3f1936c-4896-4468-b5c3-958691b633b7" (UID: "c3f1936c-4896-4468-b5c3-958691b633b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.862976 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f1936c-4896-4468-b5c3-958691b633b7-kube-api-access-jp96j" (OuterVolumeSpecName: "kube-api-access-jp96j") pod "c3f1936c-4896-4468-b5c3-958691b633b7" (UID: "c3f1936c-4896-4468-b5c3-958691b633b7"). InnerVolumeSpecName "kube-api-access-jp96j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.957709 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.957777 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp96j\" (UniqueName: \"kubernetes.io/projected/c3f1936c-4896-4468-b5c3-958691b633b7-kube-api-access-jp96j\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:16 crc kubenswrapper[4931]: I0131 04:29:16.968573 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3f1936c-4896-4468-b5c3-958691b633b7" (UID: "c3f1936c-4896-4468-b5c3-958691b633b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.059306 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3f1936c-4896-4468-b5c3-958691b633b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.238631 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9"] Jan 31 04:29:17 crc kubenswrapper[4931]: E0131 04:29:17.238976 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="extract-utilities" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.239002 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="extract-utilities" Jan 31 04:29:17 crc kubenswrapper[4931]: E0131 04:29:17.239030 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="extract-content" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.239042 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="extract-content" Jan 31 04:29:17 crc kubenswrapper[4931]: E0131 04:29:17.239064 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="registry-server" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.239077 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="registry-server" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.239234 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" containerName="registry-server" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.240780 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.244973 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.245088 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.245922 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.246240 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.246499 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.245415 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.255031 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.260675 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9"] Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.261572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.264217 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de9b46fb-9755-496b-8985-27e234a18cfe-serving-cert\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.264341 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsfd\" (UniqueName: \"kubernetes.io/projected/de9b46fb-9755-496b-8985-27e234a18cfe-kube-api-access-vxsfd\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.264393 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-client-ca\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.264438 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-config\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.362566 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3f1936c-4896-4468-b5c3-958691b633b7" containerID="83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7" exitCode=0 Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.362634 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brnp5" event={"ID":"c3f1936c-4896-4468-b5c3-958691b633b7","Type":"ContainerDied","Data":"83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7"} Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.362664 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brnp5" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.362706 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brnp5" event={"ID":"c3f1936c-4896-4468-b5c3-958691b633b7","Type":"ContainerDied","Data":"f8735025ac4188aab7a41004864c014eb1afe8791c4ad73c31bcc4b3fe9bdfc5"} Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.362796 4931 scope.go:117] "RemoveContainer" containerID="83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.365323 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de9b46fb-9755-496b-8985-27e234a18cfe-serving-cert\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.365414 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxsfd\" (UniqueName: \"kubernetes.io/projected/de9b46fb-9755-496b-8985-27e234a18cfe-kube-api-access-vxsfd\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.365470 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-client-ca\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.365497 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-config\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.365576 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " 
pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.367521 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.367821 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-config\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.368903 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-client-ca\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.369411 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de9b46fb-9755-496b-8985-27e234a18cfe-serving-cert\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.384147 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxsfd\" (UniqueName: \"kubernetes.io/projected/de9b46fb-9755-496b-8985-27e234a18cfe-kube-api-access-vxsfd\") pod \"controller-manager-6dfcd5c5b4-zctk9\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.441592 4931 scope.go:117] "RemoveContainer" containerID="91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.449884 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brnp5"] Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.453250 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brnp5"] Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.464992 4931 scope.go:117] "RemoveContainer" containerID="99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.489090 4931 scope.go:117] "RemoveContainer" containerID="83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7" Jan 31 04:29:17 crc kubenswrapper[4931]: E0131 04:29:17.489895 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7\": container with ID starting with 83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7 not found: ID does not exist" containerID="83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.489979 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7"} err="failed to get container status \"83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7\": rpc error: code = NotFound desc = could not find container \"83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7\": container with ID starting with 83290a38c24051df2637dae4750c2dc49a5c6565e4ce514b5708146447bda3a7 not found: ID does not exist" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.490039 4931 scope.go:117] "RemoveContainer" containerID="91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c" Jan 31 04:29:17 crc kubenswrapper[4931]: E0131 04:29:17.490625 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c\": container with ID starting with 91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c not found: ID does not exist" containerID="91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.490676 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c"} err="failed to get container status \"91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c\": rpc error: code = NotFound desc = could not find container \"91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c\": container with ID starting with 91f90cb221ba368f84bfa5b2c438df980d2cc9eb36cfa3fec4efec05c385863c not found: ID does not exist" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.490715 4931 scope.go:117] "RemoveContainer" containerID="99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8" Jan 31 04:29:17 crc kubenswrapper[4931]: E0131 04:29:17.491154 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8\": container with ID starting with 99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8 not found: ID does not exist" containerID="99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.491187 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8"} err="failed to get container status \"99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8\": rpc error: code = NotFound desc = could not find container \"99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8\": container with ID starting with 99101b70a17dd0f6c5a2d91ed4363b4449c94afe1afd06cc30180d6f581959e8 not found: ID does not exist" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.510943 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.586784 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.793407 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9"] Jan 31 04:29:17 crc kubenswrapper[4931]: W0131 04:29:17.799128 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde9b46fb_9755_496b_8985_27e234a18cfe.slice/crio-01b85ba231476c8539130f8b5b6e1ec1c5c640326300221bf5d694c776412348 WatchSource:0}: Error finding container 01b85ba231476c8539130f8b5b6e1ec1c5c640326300221bf5d694c776412348: Status 404 returned error can't find the container with id 01b85ba231476c8539130f8b5b6e1ec1c5c640326300221bf5d694c776412348 Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.902365 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f1936c-4896-4468-b5c3-958691b633b7" path="/var/lib/kubelet/pods/c3f1936c-4896-4468-b5c3-958691b633b7/volumes" Jan 31 04:29:17 crc kubenswrapper[4931]: I0131 04:29:17.922167 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 04:29:18 crc kubenswrapper[4931]: I0131 04:29:18.346780 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 04:29:18 crc kubenswrapper[4931]: I0131 04:29:18.368553 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" event={"ID":"de9b46fb-9755-496b-8985-27e234a18cfe","Type":"ContainerStarted","Data":"a593448da46992bee1d3e355abc2b9bc83d4f73b958279415aff6c9dcafd484c"} Jan 31 04:29:18 crc kubenswrapper[4931]: I0131 04:29:18.368627 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" event={"ID":"de9b46fb-9755-496b-8985-27e234a18cfe","Type":"ContainerStarted","Data":"01b85ba231476c8539130f8b5b6e1ec1c5c640326300221bf5d694c776412348"} Jan 31 04:29:18 crc kubenswrapper[4931]: I0131 04:29:18.368652 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:18 crc kubenswrapper[4931]: I0131 04:29:18.383592 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:29:18 crc kubenswrapper[4931]: I0131 04:29:18.393822 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" podStartSLOduration=6.393803524 podStartE2EDuration="6.393803524s" podCreationTimestamp="2026-01-31 04:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:29:18.390410284 +0000 UTC m=+317.199639178" watchObservedRunningTime="2026-01-31 04:29:18.393803524 +0000 UTC m=+317.203032388" Jan 31 04:29:24 crc kubenswrapper[4931]: I0131 04:29:24.894076 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.167613 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx"] Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 
04:30:00.168804 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.171501 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.172489 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.218119 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx"] Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.307825 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d79dee79-5a07-4f91-91fa-c845a0ceafcc-secret-volume\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.307903 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d79dee79-5a07-4f91-91fa-c845a0ceafcc-config-volume\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.307934 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvfv\" (UniqueName: \"kubernetes.io/projected/d79dee79-5a07-4f91-91fa-c845a0ceafcc-kube-api-access-qqvfv\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.409695 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d79dee79-5a07-4f91-91fa-c845a0ceafcc-config-volume\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.409801 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvfv\" (UniqueName: \"kubernetes.io/projected/d79dee79-5a07-4f91-91fa-c845a0ceafcc-kube-api-access-qqvfv\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.409903 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d79dee79-5a07-4f91-91fa-c845a0ceafcc-secret-volume\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.410627 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d79dee79-5a07-4f91-91fa-c845a0ceafcc-config-volume\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.416041 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d79dee79-5a07-4f91-91fa-c845a0ceafcc-secret-volume\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.431782 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvfv\" (UniqueName: \"kubernetes.io/projected/d79dee79-5a07-4f91-91fa-c845a0ceafcc-kube-api-access-qqvfv\") pod \"collect-profiles-29497230-t68fx\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.493671 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:00 crc kubenswrapper[4931]: I0131 04:30:00.879491 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx"] Jan 31 04:30:01 crc kubenswrapper[4931]: I0131 04:30:01.606386 4931 generic.go:334] "Generic (PLEG): container finished" podID="d79dee79-5a07-4f91-91fa-c845a0ceafcc" containerID="d1864a8c9312630a358e22ff8308f1baa60c4ec76db51d8fb5097c924f2a24f9" exitCode=0 Jan 31 04:30:01 crc kubenswrapper[4931]: I0131 04:30:01.606431 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" event={"ID":"d79dee79-5a07-4f91-91fa-c845a0ceafcc","Type":"ContainerDied","Data":"d1864a8c9312630a358e22ff8308f1baa60c4ec76db51d8fb5097c924f2a24f9"} Jan 31 04:30:01 crc kubenswrapper[4931]: I0131 04:30:01.606661 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" event={"ID":"d79dee79-5a07-4f91-91fa-c845a0ceafcc","Type":"ContainerStarted","Data":"385456aab53f1b55dfee54f566776d9beac8589976c2a905729f64c49cdd99f5"} Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.852596 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.940614 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d79dee79-5a07-4f91-91fa-c845a0ceafcc-config-volume\") pod \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.941078 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvfv\" (UniqueName: \"kubernetes.io/projected/d79dee79-5a07-4f91-91fa-c845a0ceafcc-kube-api-access-qqvfv\") pod \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.941145 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d79dee79-5a07-4f91-91fa-c845a0ceafcc-secret-volume\") pod \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\" (UID: \"d79dee79-5a07-4f91-91fa-c845a0ceafcc\") " Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.941207 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79dee79-5a07-4f91-91fa-c845a0ceafcc-config-volume" (OuterVolumeSpecName: "config-volume") pod "d79dee79-5a07-4f91-91fa-c845a0ceafcc" (UID: "d79dee79-5a07-4f91-91fa-c845a0ceafcc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.941488 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d79dee79-5a07-4f91-91fa-c845a0ceafcc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.947476 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79dee79-5a07-4f91-91fa-c845a0ceafcc-kube-api-access-qqvfv" (OuterVolumeSpecName: "kube-api-access-qqvfv") pod "d79dee79-5a07-4f91-91fa-c845a0ceafcc" (UID: "d79dee79-5a07-4f91-91fa-c845a0ceafcc"). InnerVolumeSpecName "kube-api-access-qqvfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:02 crc kubenswrapper[4931]: I0131 04:30:02.956887 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79dee79-5a07-4f91-91fa-c845a0ceafcc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d79dee79-5a07-4f91-91fa-c845a0ceafcc" (UID: "d79dee79-5a07-4f91-91fa-c845a0ceafcc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:30:03 crc kubenswrapper[4931]: I0131 04:30:03.042532 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvfv\" (UniqueName: \"kubernetes.io/projected/d79dee79-5a07-4f91-91fa-c845a0ceafcc-kube-api-access-qqvfv\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:03 crc kubenswrapper[4931]: I0131 04:30:03.042565 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d79dee79-5a07-4f91-91fa-c845a0ceafcc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:03 crc kubenswrapper[4931]: I0131 04:30:03.616335 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" event={"ID":"d79dee79-5a07-4f91-91fa-c845a0ceafcc","Type":"ContainerDied","Data":"385456aab53f1b55dfee54f566776d9beac8589976c2a905729f64c49cdd99f5"} Jan 31 04:30:03 crc kubenswrapper[4931]: I0131 04:30:03.616372 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385456aab53f1b55dfee54f566776d9beac8589976c2a905729f64c49cdd99f5" Jan 31 04:30:03 crc kubenswrapper[4931]: I0131 04:30:03.616407 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-t68fx" Jan 31 04:30:08 crc kubenswrapper[4931]: I0131 04:30:08.949524 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9"] Jan 31 04:30:08 crc kubenswrapper[4931]: I0131 04:30:08.950935 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" podUID="de9b46fb-9755-496b-8985-27e234a18cfe" containerName="controller-manager" containerID="cri-o://a593448da46992bee1d3e355abc2b9bc83d4f73b958279415aff6c9dcafd484c" gracePeriod=30 Jan 31 04:30:08 crc kubenswrapper[4931]: I0131 04:30:08.976563 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-9h798"] Jan 31 04:30:08 crc kubenswrapper[4931]: I0131 04:30:08.976790 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" podUID="df41a5f8-4bc3-41a5-be02-96c471389e9e" containerName="route-controller-manager" containerID="cri-o://f3fd4944906356146467814b884b095c35b3481e0d7345bc2e792dec141c8eb8" gracePeriod=30 Jan 31 04:30:09 crc kubenswrapper[4931]: I0131 04:30:09.674223 4931 generic.go:334] "Generic (PLEG): container finished" podID="de9b46fb-9755-496b-8985-27e234a18cfe" containerID="a593448da46992bee1d3e355abc2b9bc83d4f73b958279415aff6c9dcafd484c" exitCode=0 Jan 31 04:30:09 crc kubenswrapper[4931]: I0131 04:30:09.674316 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" event={"ID":"de9b46fb-9755-496b-8985-27e234a18cfe","Type":"ContainerDied","Data":"a593448da46992bee1d3e355abc2b9bc83d4f73b958279415aff6c9dcafd484c"} Jan 31 04:30:09 crc kubenswrapper[4931]: I0131 04:30:09.675693 4931 generic.go:334] "Generic (PLEG): container finished" podID="df41a5f8-4bc3-41a5-be02-96c471389e9e" containerID="f3fd4944906356146467814b884b095c35b3481e0d7345bc2e792dec141c8eb8" exitCode=0 Jan 31 04:30:09 crc kubenswrapper[4931]: I0131 04:30:09.675744 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" event={"ID":"df41a5f8-4bc3-41a5-be02-96c471389e9e","Type":"ContainerDied","Data":"f3fd4944906356146467814b884b095c35b3481e0d7345bc2e792dec141c8eb8"} Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.011809 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.015103 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.043100 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868dcd769b-q9bzr"] Jan 31 04:30:10 crc kubenswrapper[4931]: E0131 04:30:10.043364 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9b46fb-9755-496b-8985-27e234a18cfe" containerName="controller-manager" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.043386 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9b46fb-9755-496b-8985-27e234a18cfe" containerName="controller-manager" Jan 31 04:30:10 crc kubenswrapper[4931]: E0131 04:30:10.043398 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79dee79-5a07-4f91-91fa-c845a0ceafcc" containerName="collect-profiles" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.043408 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79dee79-5a07-4f91-91fa-c845a0ceafcc" containerName="collect-profiles" Jan 31 04:30:10 crc kubenswrapper[4931]: E0131 04:30:10.043422 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df41a5f8-4bc3-41a5-be02-96c471389e9e" containerName="route-controller-manager" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.043432 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="df41a5f8-4bc3-41a5-be02-96c471389e9e" containerName="route-controller-manager" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.043545 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="df41a5f8-4bc3-41a5-be02-96c471389e9e" containerName="route-controller-manager" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.043559 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9b46fb-9755-496b-8985-27e234a18cfe" containerName="controller-manager" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.043574 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79dee79-5a07-4f91-91fa-c845a0ceafcc" containerName="collect-profiles" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.044002 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.062380 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868dcd769b-q9bzr"] Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.134900 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcwjg\" (UniqueName: \"kubernetes.io/projected/df41a5f8-4bc3-41a5-be02-96c471389e9e-kube-api-access-wcwjg\") pod \"df41a5f8-4bc3-41a5-be02-96c471389e9e\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.135066 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-client-ca\") pod \"de9b46fb-9755-496b-8985-27e234a18cfe\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.135097 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxsfd\" (UniqueName: \"kubernetes.io/projected/de9b46fb-9755-496b-8985-27e234a18cfe-kube-api-access-vxsfd\") pod \"de9b46fb-9755-496b-8985-27e234a18cfe\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136054 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-config\") pod \"de9b46fb-9755-496b-8985-27e234a18cfe\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136093 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de9b46fb-9755-496b-8985-27e234a18cfe-serving-cert\") pod \"de9b46fb-9755-496b-8985-27e234a18cfe\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136190 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-proxy-ca-bundles\") pod \"de9b46fb-9755-496b-8985-27e234a18cfe\" (UID: \"de9b46fb-9755-496b-8985-27e234a18cfe\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136218 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-client-ca\") pod \"df41a5f8-4bc3-41a5-be02-96c471389e9e\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136251 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df41a5f8-4bc3-41a5-be02-96c471389e9e-serving-cert\") pod \"df41a5f8-4bc3-41a5-be02-96c471389e9e\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136274 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-config\") pod \"df41a5f8-4bc3-41a5-be02-96c471389e9e\" (UID: \"df41a5f8-4bc3-41a5-be02-96c471389e9e\") " Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136438 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-config\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136465 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-kube-api-access-2nbvg\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-client-ca\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136619 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-serving-cert\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136666 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-proxy-ca-bundles\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136676 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-config" (OuterVolumeSpecName: "config") pod "de9b46fb-9755-496b-8985-27e234a18cfe" (UID: "de9b46fb-9755-496b-8985-27e234a18cfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136737 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-client-ca" (OuterVolumeSpecName: "client-ca") pod "de9b46fb-9755-496b-8985-27e234a18cfe" (UID: "de9b46fb-9755-496b-8985-27e234a18cfe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.136685 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "de9b46fb-9755-496b-8985-27e234a18cfe" (UID: "de9b46fb-9755-496b-8985-27e234a18cfe"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.138000 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-client-ca" (OuterVolumeSpecName: "client-ca") pod "df41a5f8-4bc3-41a5-be02-96c471389e9e" (UID: "df41a5f8-4bc3-41a5-be02-96c471389e9e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.138118 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.138136 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.138149 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de9b46fb-9755-496b-8985-27e234a18cfe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.138152 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-config" (OuterVolumeSpecName: "config") pod "df41a5f8-4bc3-41a5-be02-96c471389e9e" (UID: "df41a5f8-4bc3-41a5-be02-96c471389e9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.141980 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de9b46fb-9755-496b-8985-27e234a18cfe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de9b46fb-9755-496b-8985-27e234a18cfe" (UID: "de9b46fb-9755-496b-8985-27e234a18cfe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.142037 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9b46fb-9755-496b-8985-27e234a18cfe-kube-api-access-vxsfd" (OuterVolumeSpecName: "kube-api-access-vxsfd") pod "de9b46fb-9755-496b-8985-27e234a18cfe" (UID: "de9b46fb-9755-496b-8985-27e234a18cfe"). InnerVolumeSpecName "kube-api-access-vxsfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.155944 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df41a5f8-4bc3-41a5-be02-96c471389e9e-kube-api-access-wcwjg" (OuterVolumeSpecName: "kube-api-access-wcwjg") pod "df41a5f8-4bc3-41a5-be02-96c471389e9e" (UID: "df41a5f8-4bc3-41a5-be02-96c471389e9e"). InnerVolumeSpecName "kube-api-access-wcwjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.155997 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df41a5f8-4bc3-41a5-be02-96c471389e9e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df41a5f8-4bc3-41a5-be02-96c471389e9e" (UID: "df41a5f8-4bc3-41a5-be02-96c471389e9e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240157 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-config\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240208 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-kube-api-access-2nbvg\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240273 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-client-ca\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240321 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-serving-cert\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240354 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-proxy-ca-bundles\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240438 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxsfd\" (UniqueName: \"kubernetes.io/projected/de9b46fb-9755-496b-8985-27e234a18cfe-kube-api-access-vxsfd\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240469 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de9b46fb-9755-496b-8985-27e234a18cfe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240480 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240489 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df41a5f8-4bc3-41a5-be02-96c471389e9e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240497 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df41a5f8-4bc3-41a5-be02-96c471389e9e-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.240507 4931 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcwjg\" (UniqueName: \"kubernetes.io/projected/df41a5f8-4bc3-41a5-be02-96c471389e9e-kube-api-access-wcwjg\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.241818 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-client-ca\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.242044 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-config\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.242160 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-proxy-ca-bundles\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.247312 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-serving-cert\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.260802 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/7b4d59d6-311b-447b-a61b-42bcdf4e3b4b-kube-api-access-2nbvg\") pod \"controller-manager-868dcd769b-q9bzr\" (UID: \"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b\") " pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.271985 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss"] Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.272756 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.284847 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss"] Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.358428 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.442825 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9b116a-a1e3-455e-976e-c9a255f28cef-config\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.443321 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbd8q\" (UniqueName: \"kubernetes.io/projected/4f9b116a-a1e3-455e-976e-c9a255f28cef-kube-api-access-nbd8q\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.443425 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9b116a-a1e3-455e-976e-c9a255f28cef-serving-cert\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.443474 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f9b116a-a1e3-455e-976e-c9a255f28cef-client-ca\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.544363 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9b116a-a1e3-455e-976e-c9a255f28cef-serving-cert\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.544421 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f9b116a-a1e3-455e-976e-c9a255f28cef-client-ca\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.544499 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9b116a-a1e3-455e-976e-c9a255f28cef-config\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.544536 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbd8q\" (UniqueName: \"kubernetes.io/projected/4f9b116a-a1e3-455e-976e-c9a255f28cef-kube-api-access-nbd8q\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: 
\"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.547017 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f9b116a-a1e3-455e-976e-c9a255f28cef-client-ca\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.553477 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9b116a-a1e3-455e-976e-c9a255f28cef-config\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.554986 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9b116a-a1e3-455e-976e-c9a255f28cef-serving-cert\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.564533 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbd8q\" (UniqueName: \"kubernetes.io/projected/4f9b116a-a1e3-455e-976e-c9a255f28cef-kube-api-access-nbd8q\") pod \"route-controller-manager-786b75fb9b-sj4ss\" (UID: \"4f9b116a-a1e3-455e-976e-c9a255f28cef\") " pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.604762 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.685456 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" event={"ID":"df41a5f8-4bc3-41a5-be02-96c471389e9e","Type":"ContainerDied","Data":"8fe4efee86f931aa1a767a946540ef5ed8e8c55f616ecb49b489ff6f05b42a0d"} Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.685512 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-9h798" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.685540 4931 scope.go:117] "RemoveContainer" containerID="f3fd4944906356146467814b884b095c35b3481e0d7345bc2e792dec141c8eb8" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.688932 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" event={"ID":"de9b46fb-9755-496b-8985-27e234a18cfe","Type":"ContainerDied","Data":"01b85ba231476c8539130f8b5b6e1ec1c5c640326300221bf5d694c776412348"} Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.688986 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.709572 4931 scope.go:117] "RemoveContainer" containerID="a593448da46992bee1d3e355abc2b9bc83d4f73b958279415aff6c9dcafd484c" Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.721917 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9"] Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.724863 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-zctk9"] Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.731648 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-9h798"] Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.739169 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-9h798"] Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.814379 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss"] Jan 31 04:30:10 crc kubenswrapper[4931]: W0131 04:30:10.821536 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f9b116a_a1e3_455e_976e_c9a255f28cef.slice/crio-3722ae0182f539d515959bbfd5237e55302a21e43cb9274b6c3ffcbdcf19d482 WatchSource:0}: Error finding container 3722ae0182f539d515959bbfd5237e55302a21e43cb9274b6c3ffcbdcf19d482: Status 404 returned error can't find the container with id 3722ae0182f539d515959bbfd5237e55302a21e43cb9274b6c3ffcbdcf19d482 Jan 31 04:30:10 crc kubenswrapper[4931]: I0131 04:30:10.822571 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868dcd769b-q9bzr"] Jan 31 04:30:10 crc kubenswrapper[4931]: W0131 04:30:10.828011 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b4d59d6_311b_447b_a61b_42bcdf4e3b4b.slice/crio-5e4d5e2033cf164ef3f341c3a4f26d55ebcb7200e8a004e4ac8b77ed20c2027a WatchSource:0}: Error finding container 5e4d5e2033cf164ef3f341c3a4f26d55ebcb7200e8a004e4ac8b77ed20c2027a: Status 404 returned error can't find the container with id 5e4d5e2033cf164ef3f341c3a4f26d55ebcb7200e8a004e4ac8b77ed20c2027a Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.694650 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" event={"ID":"4f9b116a-a1e3-455e-976e-c9a255f28cef","Type":"ContainerStarted","Data":"07c538b5f5e5b575cf73af514fb91dab00404328d4a4c29de9a6abf259a10e0e"} Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.695264 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.695282 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" event={"ID":"4f9b116a-a1e3-455e-976e-c9a255f28cef","Type":"ContainerStarted","Data":"3722ae0182f539d515959bbfd5237e55302a21e43cb9274b6c3ffcbdcf19d482"} Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.695974 4931 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" event={"ID":"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b","Type":"ContainerStarted","Data":"b7ad7b4ed9d08c05b912b2ea9c3102dc6dc3502a8cba1e06b0f791676ac540fb"} Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.696028 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" event={"ID":"7b4d59d6-311b-447b-a61b-42bcdf4e3b4b","Type":"ContainerStarted","Data":"5e4d5e2033cf164ef3f341c3a4f26d55ebcb7200e8a004e4ac8b77ed20c2027a"} Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.696056 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.699805 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.700026 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.713651 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-786b75fb9b-sj4ss" podStartSLOduration=2.713636234 podStartE2EDuration="2.713636234s" podCreationTimestamp="2026-01-31 04:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:30:11.711143477 +0000 UTC m=+370.520372351" watchObservedRunningTime="2026-01-31 04:30:11.713636234 +0000 UTC m=+370.522865108" Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.757576 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-868dcd769b-q9bzr" podStartSLOduration=3.7575547350000003 podStartE2EDuration="3.757554735s" podCreationTimestamp="2026-01-31 04:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:30:11.754348946 +0000 UTC m=+370.563577830" watchObservedRunningTime="2026-01-31 04:30:11.757554735 +0000 UTC m=+370.566783619" Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.905247 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9b46fb-9755-496b-8985-27e234a18cfe" path="/var/lib/kubelet/pods/de9b46fb-9755-496b-8985-27e234a18cfe/volumes" Jan 31 04:30:11 crc kubenswrapper[4931]: I0131 04:30:11.905859 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df41a5f8-4bc3-41a5-be02-96c471389e9e" path="/var/lib/kubelet/pods/df41a5f8-4bc3-41a5-be02-96c471389e9e/volumes" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.324901 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfs9t"] Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.325768 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfs9t" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="registry-server" containerID="cri-o://571cc45d42c9095da3bfe4102898f92ca604405d362d0bda47c8877f31479363" gracePeriod=30 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.332112 4931 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2h5z9"] Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.332383 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2h5z9" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="registry-server" containerID="cri-o://10ff3262489b5910235604a2ea9b45d585275d72194510c26fe8a8e3a3974227" gracePeriod=30 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.338507 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drqrf"] Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.338750 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" containerID="cri-o://4c66077881cec64331ed6c4fa597771bef3329e8c8eb6639da57807e9f0b3b4c" gracePeriod=30 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.351794 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmxsb"] Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.352054 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jmxsb" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="registry-server" containerID="cri-o://234befd7655a7a7aaed306a0b21c774079633c1f2c7fe2e9df5b6eb04959ad8b" gracePeriod=30 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.364370 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wpx9n"] Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.364733 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wpx9n" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="registry-server" containerID="cri-o://8ce5632c2824db6fb1e2484b806fa5bb36ec7e90427d98302b1a074fc20e1e70" gracePeriod=30 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.373653 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26slj"] Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.374679 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.388208 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26slj"] Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.549398 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkk7\" (UniqueName: \"kubernetes.io/projected/fd940fdb-3b83-421e-bdaa-5a238a9bb908-kube-api-access-kdkk7\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.549521 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd940fdb-3b83-421e-bdaa-5a238a9bb908-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.549584 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd940fdb-3b83-421e-bdaa-5a238a9bb908-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.650649 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdkk7\" (UniqueName: \"kubernetes.io/projected/fd940fdb-3b83-421e-bdaa-5a238a9bb908-kube-api-access-kdkk7\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.650756 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd940fdb-3b83-421e-bdaa-5a238a9bb908-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.650791 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd940fdb-3b83-421e-bdaa-5a238a9bb908-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.652888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd940fdb-3b83-421e-bdaa-5a238a9bb908-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.659000 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/fd940fdb-3b83-421e-bdaa-5a238a9bb908-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.667478 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkk7\" (UniqueName: \"kubernetes.io/projected/fd940fdb-3b83-421e-bdaa-5a238a9bb908-kube-api-access-kdkk7\") pod \"marketplace-operator-79b997595-26slj\" (UID: \"fd940fdb-3b83-421e-bdaa-5a238a9bb908\") " pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.701143 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.735011 4931 generic.go:334] "Generic (PLEG): container finished" podID="b94cc359-b91d-4058-9b99-daee5cb58497" containerID="4c66077881cec64331ed6c4fa597771bef3329e8c8eb6639da57807e9f0b3b4c" exitCode=0 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.735078 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" event={"ID":"b94cc359-b91d-4058-9b99-daee5cb58497","Type":"ContainerDied","Data":"4c66077881cec64331ed6c4fa597771bef3329e8c8eb6639da57807e9f0b3b4c"} Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.735120 4931 scope.go:117] "RemoveContainer" containerID="2214ded12636a10761e8ebbc7d9272c977396c766915fdceeaefef57bd936a80" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.739108 4931 generic.go:334] "Generic (PLEG): container finished" podID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerID="571cc45d42c9095da3bfe4102898f92ca604405d362d0bda47c8877f31479363" exitCode=0 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.739145 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfs9t" event={"ID":"471d6f3b-ab9a-414a-b14f-d719d2d5e96a","Type":"ContainerDied","Data":"571cc45d42c9095da3bfe4102898f92ca604405d362d0bda47c8877f31479363"} Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.743534 4931 generic.go:334] "Generic (PLEG): container finished" podID="a2e61b47-71de-4eff-a485-7ace762bad74" containerID="8ce5632c2824db6fb1e2484b806fa5bb36ec7e90427d98302b1a074fc20e1e70" exitCode=0 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.743580 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpx9n" event={"ID":"a2e61b47-71de-4eff-a485-7ace762bad74","Type":"ContainerDied","Data":"8ce5632c2824db6fb1e2484b806fa5bb36ec7e90427d98302b1a074fc20e1e70"} Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.745757 4931 generic.go:334] "Generic (PLEG): container finished" podID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerID="234befd7655a7a7aaed306a0b21c774079633c1f2c7fe2e9df5b6eb04959ad8b" exitCode=0 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.745809 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmxsb" event={"ID":"2bff2dda-54db-4a36-a7a5-af34cf3367dc","Type":"ContainerDied","Data":"234befd7655a7a7aaed306a0b21c774079633c1f2c7fe2e9df5b6eb04959ad8b"} Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.752361 4931 generic.go:334] "Generic (PLEG): container finished" podID="6fd4e055-c0fd-4afb-873d-9920d5765466" 
containerID="10ff3262489b5910235604a2ea9b45d585275d72194510c26fe8a8e3a3974227" exitCode=0 Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.752400 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h5z9" event={"ID":"6fd4e055-c0fd-4afb-873d-9920d5765466","Type":"ContainerDied","Data":"10ff3262489b5910235604a2ea9b45d585275d72194510c26fe8a8e3a3974227"} Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.865324 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.954543 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-catalog-content\") pod \"a2e61b47-71de-4eff-a485-7ace762bad74\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.954584 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4mrx\" (UniqueName: \"kubernetes.io/projected/a2e61b47-71de-4eff-a485-7ace762bad74-kube-api-access-h4mrx\") pod \"a2e61b47-71de-4eff-a485-7ace762bad74\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.954614 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-utilities\") pod \"a2e61b47-71de-4eff-a485-7ace762bad74\" (UID: \"a2e61b47-71de-4eff-a485-7ace762bad74\") " Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.955559 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-utilities" (OuterVolumeSpecName: "utilities") pod "a2e61b47-71de-4eff-a485-7ace762bad74" (UID: "a2e61b47-71de-4eff-a485-7ace762bad74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.959886 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e61b47-71de-4eff-a485-7ace762bad74-kube-api-access-h4mrx" (OuterVolumeSpecName: "kube-api-access-h4mrx") pod "a2e61b47-71de-4eff-a485-7ace762bad74" (UID: "a2e61b47-71de-4eff-a485-7ace762bad74"). InnerVolumeSpecName "kube-api-access-h4mrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:18 crc kubenswrapper[4931]: I0131 04:30:18.985172 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.056432 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4mrx\" (UniqueName: \"kubernetes.io/projected/a2e61b47-71de-4eff-a485-7ace762bad74-kube-api-access-h4mrx\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.056463 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.073037 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.075867 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.104256 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2e61b47-71de-4eff-a485-7ace762bad74" (UID: "a2e61b47-71de-4eff-a485-7ace762bad74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157122 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2ng8\" (UniqueName: \"kubernetes.io/projected/b94cc359-b91d-4058-9b99-daee5cb58497-kube-api-access-c2ng8\") pod \"b94cc359-b91d-4058-9b99-daee5cb58497\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157252 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-catalog-content\") pod \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157283 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-utilities\") pod \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157334 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-242mv\" (UniqueName: \"kubernetes.io/projected/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-kube-api-access-242mv\") pod \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-catalog-content\") pod \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157494 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-trusted-ca\") pod \"b94cc359-b91d-4058-9b99-daee5cb58497\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157527 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-operator-metrics\") pod \"b94cc359-b91d-4058-9b99-daee5cb58497\" (UID: \"b94cc359-b91d-4058-9b99-daee5cb58497\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157590 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-utilities\") pod 
\"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\" (UID: \"471d6f3b-ab9a-414a-b14f-d719d2d5e96a\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157622 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p6sh\" (UniqueName: \"kubernetes.io/projected/2bff2dda-54db-4a36-a7a5-af34cf3367dc-kube-api-access-6p6sh\") pod \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\" (UID: \"2bff2dda-54db-4a36-a7a5-af34cf3367dc\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.157959 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e61b47-71de-4eff-a485-7ace762bad74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.158386 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b94cc359-b91d-4058-9b99-daee5cb58497" (UID: "b94cc359-b91d-4058-9b99-daee5cb58497"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.158711 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-utilities" (OuterVolumeSpecName: "utilities") pod "2bff2dda-54db-4a36-a7a5-af34cf3367dc" (UID: "2bff2dda-54db-4a36-a7a5-af34cf3367dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.158767 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-utilities" (OuterVolumeSpecName: "utilities") pod "471d6f3b-ab9a-414a-b14f-d719d2d5e96a" (UID: "471d6f3b-ab9a-414a-b14f-d719d2d5e96a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.161069 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94cc359-b91d-4058-9b99-daee5cb58497-kube-api-access-c2ng8" (OuterVolumeSpecName: "kube-api-access-c2ng8") pod "b94cc359-b91d-4058-9b99-daee5cb58497" (UID: "b94cc359-b91d-4058-9b99-daee5cb58497"). InnerVolumeSpecName "kube-api-access-c2ng8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.161871 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b94cc359-b91d-4058-9b99-daee5cb58497" (UID: "b94cc359-b91d-4058-9b99-daee5cb58497"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.161868 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bff2dda-54db-4a36-a7a5-af34cf3367dc-kube-api-access-6p6sh" (OuterVolumeSpecName: "kube-api-access-6p6sh") pod "2bff2dda-54db-4a36-a7a5-af34cf3367dc" (UID: "2bff2dda-54db-4a36-a7a5-af34cf3367dc"). InnerVolumeSpecName "kube-api-access-6p6sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.163476 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-kube-api-access-242mv" (OuterVolumeSpecName: "kube-api-access-242mv") pod "471d6f3b-ab9a-414a-b14f-d719d2d5e96a" (UID: "471d6f3b-ab9a-414a-b14f-d719d2d5e96a"). InnerVolumeSpecName "kube-api-access-242mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.181637 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bff2dda-54db-4a36-a7a5-af34cf3367dc" (UID: "2bff2dda-54db-4a36-a7a5-af34cf3367dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.240325 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26slj"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.246090 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "471d6f3b-ab9a-414a-b14f-d719d2d5e96a" (UID: "471d6f3b-ab9a-414a-b14f-d719d2d5e96a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260363 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2ng8\" (UniqueName: \"kubernetes.io/projected/b94cc359-b91d-4058-9b99-daee5cb58497-kube-api-access-c2ng8\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260390 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260400 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260415 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-242mv\" (UniqueName: \"kubernetes.io/projected/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-kube-api-access-242mv\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260431 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bff2dda-54db-4a36-a7a5-af34cf3367dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260444 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260455 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b94cc359-b91d-4058-9b99-daee5cb58497-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc 
kubenswrapper[4931]: I0131 04:30:19.260464 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471d6f3b-ab9a-414a-b14f-d719d2d5e96a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.260473 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p6sh\" (UniqueName: \"kubernetes.io/projected/2bff2dda-54db-4a36-a7a5-af34cf3367dc-kube-api-access-6p6sh\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.409745 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.564114 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmhjk\" (UniqueName: \"kubernetes.io/projected/6fd4e055-c0fd-4afb-873d-9920d5765466-kube-api-access-tmhjk\") pod \"6fd4e055-c0fd-4afb-873d-9920d5765466\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.564178 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-catalog-content\") pod \"6fd4e055-c0fd-4afb-873d-9920d5765466\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.564230 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-utilities\") pod \"6fd4e055-c0fd-4afb-873d-9920d5765466\" (UID: \"6fd4e055-c0fd-4afb-873d-9920d5765466\") " Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.565075 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-utilities" (OuterVolumeSpecName: "utilities") pod "6fd4e055-c0fd-4afb-873d-9920d5765466" (UID: "6fd4e055-c0fd-4afb-873d-9920d5765466"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.567292 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd4e055-c0fd-4afb-873d-9920d5765466-kube-api-access-tmhjk" (OuterVolumeSpecName: "kube-api-access-tmhjk") pod "6fd4e055-c0fd-4afb-873d-9920d5765466" (UID: "6fd4e055-c0fd-4afb-873d-9920d5765466"). InnerVolumeSpecName "kube-api-access-tmhjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.632344 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fd4e055-c0fd-4afb-873d-9920d5765466" (UID: "6fd4e055-c0fd-4afb-873d-9920d5765466"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.665363 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.665410 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmhjk\" (UniqueName: \"kubernetes.io/projected/6fd4e055-c0fd-4afb-873d-9920d5765466-kube-api-access-tmhjk\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.665427 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fd4e055-c0fd-4afb-873d-9920d5765466-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.758845 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfs9t" event={"ID":"471d6f3b-ab9a-414a-b14f-d719d2d5e96a","Type":"ContainerDied","Data":"726179c5148e24fe50044539198d323c71ceeac8f289d9232013fe9684098761"} Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.758899 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfs9t" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.758911 4931 scope.go:117] "RemoveContainer" containerID="571cc45d42c9095da3bfe4102898f92ca604405d362d0bda47c8877f31479363" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.761606 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpx9n" event={"ID":"a2e61b47-71de-4eff-a485-7ace762bad74","Type":"ContainerDied","Data":"e2eca579d1a1f74da3abe7257997080e2d2b342e7b0910eca90e5e5fd8902f00"} Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.761632 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wpx9n" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.763581 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmxsb" event={"ID":"2bff2dda-54db-4a36-a7a5-af34cf3367dc","Type":"ContainerDied","Data":"aa0542ad661fdb289de312b857bc92d01f2a34ce56eec9f0d177e436803ced8c"} Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.763675 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmxsb" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.766534 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-26slj" event={"ID":"fd940fdb-3b83-421e-bdaa-5a238a9bb908","Type":"ContainerStarted","Data":"b8e39b7d6be21ba63a1a8623da16731b15ffca82a82429ac4064fe2833e0429d"} Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.766582 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-26slj" event={"ID":"fd940fdb-3b83-421e-bdaa-5a238a9bb908","Type":"ContainerStarted","Data":"2e986bbf4b681bd6074deb48715fb32d0bdadc7f0e9b8310b9455b94598fc1a9"} Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.766603 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.769767 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h5z9" event={"ID":"6fd4e055-c0fd-4afb-873d-9920d5765466","Type":"ContainerDied","Data":"6e1165c58b13bf8c2ae9ae9ac1b2c9a2f4e5b519723ee1ef4a67492b4112ec6d"} Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.769900 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2h5z9" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.771412 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" event={"ID":"b94cc359-b91d-4058-9b99-daee5cb58497","Type":"ContainerDied","Data":"9c97d735014b82bc8456fc73ff30cf9bf0e79a9c62a6cd7d47740f18c52ad2c1"} Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.771482 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drqrf" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.776484 4931 scope.go:117] "RemoveContainer" containerID="57f65a19f7fc06e250f10b159ddd7a64afbdbf2fdb25cb00ac56f9e018b8b688" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.790121 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-26slj" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.793590 4931 scope.go:117] "RemoveContainer" containerID="a0969411cfd7c069adcce155c547546b270abbedd318d1ed26158d9ec11efeff" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.819613 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-26slj" podStartSLOduration=1.8195006120000001 podStartE2EDuration="1.819500612s" podCreationTimestamp="2026-01-31 04:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:30:19.812047271 +0000 UTC m=+378.621276145" watchObservedRunningTime="2026-01-31 04:30:19.819500612 +0000 UTC m=+378.628729486" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.827316 4931 scope.go:117] "RemoveContainer" containerID="8ce5632c2824db6fb1e2484b806fa5bb36ec7e90427d98302b1a074fc20e1e70" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.837477 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfs9t"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.848797 4931 scope.go:117] "RemoveContainer" containerID="08edec43a83436aa01234bb4d3fad4226e5a88b3088990919a732207d29e619c" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.852411 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfs9t"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.859637 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wpx9n"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.862705 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wpx9n"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.883419 4931 scope.go:117] "RemoveContainer" containerID="96cfe31accacb49bb2bef9df2f41ffa06c20b4944d90d3aa185cf3da94cfea3d" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.902268 4931 scope.go:117] "RemoveContainer" containerID="234befd7655a7a7aaed306a0b21c774079633c1f2c7fe2e9df5b6eb04959ad8b" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.910133 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" path="/var/lib/kubelet/pods/471d6f3b-ab9a-414a-b14f-d719d2d5e96a/volumes" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.910899 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" path="/var/lib/kubelet/pods/a2e61b47-71de-4eff-a485-7ace762bad74/volumes" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.911605 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmxsb"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.911632 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmxsb"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.911648 
4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drqrf"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.912536 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drqrf"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.915680 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2h5z9"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.919713 4931 scope.go:117] "RemoveContainer" containerID="38e6badf128515e384543841e1ed6c7d7de678e4ac47f626f4f213d6eea0a706" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.920541 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2h5z9"] Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.933896 4931 scope.go:117] "RemoveContainer" containerID="0a1a9ed1ffeb3136c8fa43e85127a5d37922db12fd616281353f3fd7a1849f98" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.951593 4931 scope.go:117] "RemoveContainer" containerID="10ff3262489b5910235604a2ea9b45d585275d72194510c26fe8a8e3a3974227" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.971072 4931 scope.go:117] "RemoveContainer" containerID="602c87e45487ddb3d76e17726d55932e42760eb1791fff92f1c7367f79383a4b" Jan 31 04:30:19 crc kubenswrapper[4931]: I0131 04:30:19.992689 4931 scope.go:117] "RemoveContainer" containerID="d6ae0a71c09f272c62d8b893571af2c4bdfe37878fa5b6ab0016cc2ae940b980" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.005886 4931 scope.go:117] "RemoveContainer" containerID="4c66077881cec64331ed6c4fa597771bef3329e8c8eb6639da57807e9f0b3b4c" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938037 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fx9m6"] Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938507 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938520 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938533 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938541 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938553 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938562 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938574 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938581 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 
04:30:20.938592 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938600 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938610 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938617 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938625 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938631 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938639 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938645 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938654 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938659 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938668 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938674 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938703 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938710 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938734 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938740 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="extract-utilities" Jan 31 04:30:20 crc kubenswrapper[4931]: E0131 04:30:20.938748 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938754 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="registry-server" Jan 31 04:30:20 
crc kubenswrapper[4931]: E0131 04:30:20.938763 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938770 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="extract-content" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938847 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938860 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938866 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e61b47-71de-4eff-a485-7ace762bad74" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938876 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="471d6f3b-ab9a-414a-b14f-d719d2d5e96a" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.938884 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" containerName="registry-server" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.939039 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" containerName="marketplace-operator" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.939522 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.944410 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 04:30:20 crc kubenswrapper[4931]: I0131 04:30:20.952337 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fx9m6"] Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.082900 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b4ef96-378e-443d-9eeb-e75e6f181af6-catalog-content\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.082952 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8cb\" (UniqueName: \"kubernetes.io/projected/61b4ef96-378e-443d-9eeb-e75e6f181af6-kube-api-access-cf8cb\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.083262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b4ef96-378e-443d-9eeb-e75e6f181af6-utilities\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.101641 4931 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-d798x"] Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.102408 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.118592 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-d798x"] Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.133543 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.133597 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185051 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtbx5\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-kube-api-access-rtbx5\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185096 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62f2e2e0-8129-4118-ac28-37b30476c0d6-trusted-ca\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185119 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/62f2e2e0-8129-4118-ac28-37b30476c0d6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185272 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-bound-sa-token\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185322 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/62f2e2e0-8129-4118-ac28-37b30476c0d6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185364 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/61b4ef96-378e-443d-9eeb-e75e6f181af6-utilities\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185402 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185426 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-registry-tls\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185456 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/62f2e2e0-8129-4118-ac28-37b30476c0d6-registry-certificates\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185472 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b4ef96-378e-443d-9eeb-e75e6f181af6-catalog-content\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185490 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf8cb\" (UniqueName: \"kubernetes.io/projected/61b4ef96-378e-443d-9eeb-e75e6f181af6-kube-api-access-cf8cb\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185788 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b4ef96-378e-443d-9eeb-e75e6f181af6-utilities\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.185857 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b4ef96-378e-443d-9eeb-e75e6f181af6-catalog-content\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.204244 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc 
kubenswrapper[4931]: I0131 04:30:21.204393 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf8cb\" (UniqueName: \"kubernetes.io/projected/61b4ef96-378e-443d-9eeb-e75e6f181af6-kube-api-access-cf8cb\") pod \"redhat-operators-fx9m6\" (UID: \"61b4ef96-378e-443d-9eeb-e75e6f181af6\") " pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.267019 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.286325 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-bound-sa-token\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.286371 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/62f2e2e0-8129-4118-ac28-37b30476c0d6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.286482 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-registry-tls\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.286534 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/62f2e2e0-8129-4118-ac28-37b30476c0d6-registry-certificates\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.286596 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtbx5\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-kube-api-access-rtbx5\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.286628 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62f2e2e0-8129-4118-ac28-37b30476c0d6-trusted-ca\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.287001 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/62f2e2e0-8129-4118-ac28-37b30476c0d6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.287027 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/62f2e2e0-8129-4118-ac28-37b30476c0d6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.287913 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/62f2e2e0-8129-4118-ac28-37b30476c0d6-registry-certificates\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.288503 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62f2e2e0-8129-4118-ac28-37b30476c0d6-trusted-ca\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.291219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/62f2e2e0-8129-4118-ac28-37b30476c0d6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.295360 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-registry-tls\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.303408 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtbx5\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-kube-api-access-rtbx5\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.303563 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62f2e2e0-8129-4118-ac28-37b30476c0d6-bound-sa-token\") pod \"image-registry-66df7c8f76-d798x\" (UID: \"62f2e2e0-8129-4118-ac28-37b30476c0d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.416397 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.644798 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fx9m6"] Jan 31 04:30:21 crc kubenswrapper[4931]: W0131 04:30:21.648956 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b4ef96_378e_443d_9eeb_e75e6f181af6.slice/crio-f6cdce222c62a16f144cf65924fbb1523e42aaf2aec0d94804a128a719c129a9 WatchSource:0}: Error finding container f6cdce222c62a16f144cf65924fbb1523e42aaf2aec0d94804a128a719c129a9: Status 404 returned error can't find the container with id f6cdce222c62a16f144cf65924fbb1523e42aaf2aec0d94804a128a719c129a9 Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.793242 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9m6" event={"ID":"61b4ef96-378e-443d-9eeb-e75e6f181af6","Type":"ContainerStarted","Data":"f6cdce222c62a16f144cf65924fbb1523e42aaf2aec0d94804a128a719c129a9"} Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.797070 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-d798x"] Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.910041 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bff2dda-54db-4a36-a7a5-af34cf3367dc" path="/var/lib/kubelet/pods/2bff2dda-54db-4a36-a7a5-af34cf3367dc/volumes" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.910893 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd4e055-c0fd-4afb-873d-9920d5765466" path="/var/lib/kubelet/pods/6fd4e055-c0fd-4afb-873d-9920d5765466/volumes" Jan 31 04:30:21 crc kubenswrapper[4931]: I0131 04:30:21.911841 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94cc359-b91d-4058-9b99-daee5cb58497" path="/var/lib/kubelet/pods/b94cc359-b91d-4058-9b99-daee5cb58497/volumes" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.334659 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-95vhg"] Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.335687 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.338178 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.345584 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95vhg"] Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.509361 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fca46a-8b1a-4655-8f36-777e9779c57a-utilities\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.509422 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhxqv\" (UniqueName: \"kubernetes.io/projected/c7fca46a-8b1a-4655-8f36-777e9779c57a-kube-api-access-nhxqv\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.509453 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fca46a-8b1a-4655-8f36-777e9779c57a-catalog-content\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.610807 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fca46a-8b1a-4655-8f36-777e9779c57a-utilities\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.610856 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhxqv\" (UniqueName: \"kubernetes.io/projected/c7fca46a-8b1a-4655-8f36-777e9779c57a-kube-api-access-nhxqv\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.610879 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fca46a-8b1a-4655-8f36-777e9779c57a-catalog-content\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.611376 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fca46a-8b1a-4655-8f36-777e9779c57a-utilities\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.611433 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fca46a-8b1a-4655-8f36-777e9779c57a-catalog-content\") pod \"certified-operators-95vhg\" (UID: 
\"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.630966 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhxqv\" (UniqueName: \"kubernetes.io/projected/c7fca46a-8b1a-4655-8f36-777e9779c57a-kube-api-access-nhxqv\") pod \"certified-operators-95vhg\" (UID: \"c7fca46a-8b1a-4655-8f36-777e9779c57a\") " pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.710523 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.805394 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-d798x" event={"ID":"62f2e2e0-8129-4118-ac28-37b30476c0d6","Type":"ContainerStarted","Data":"242058a4a8e171f917f7a6eae2f9fbc11916ce68879c0387a5f9ee8685480c12"} Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.805433 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-d798x" event={"ID":"62f2e2e0-8129-4118-ac28-37b30476c0d6","Type":"ContainerStarted","Data":"fef51cde496685fbdad12d0cb122ffbbf5eedbed910a695fabaf35fa43042b24"} Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.805876 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.809347 4931 generic.go:334] "Generic (PLEG): container finished" podID="61b4ef96-378e-443d-9eeb-e75e6f181af6" containerID="e9d6f07068e731bf738810a7eac482ff8d8dc6a397e4937e78c7ac7e1a5b1111" exitCode=0 Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.809390 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9m6" event={"ID":"61b4ef96-378e-443d-9eeb-e75e6f181af6","Type":"ContainerDied","Data":"e9d6f07068e731bf738810a7eac482ff8d8dc6a397e4937e78c7ac7e1a5b1111"} Jan 31 04:30:22 crc kubenswrapper[4931]: I0131 04:30:22.829645 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-d798x" podStartSLOduration=1.829610879 podStartE2EDuration="1.829610879s" podCreationTimestamp="2026-01-31 04:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:30:22.822347804 +0000 UTC m=+381.631576678" watchObservedRunningTime="2026-01-31 04:30:22.829610879 +0000 UTC m=+381.638839753" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.125992 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95vhg"] Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.334998 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qclkl"] Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.336105 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.339508 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.347141 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qclkl"] Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.421595 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f974c01-9474-4fcd-a478-d9d56a32995b-catalog-content\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.421682 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f974c01-9474-4fcd-a478-d9d56a32995b-utilities\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.421837 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p95h\" (UniqueName: \"kubernetes.io/projected/3f974c01-9474-4fcd-a478-d9d56a32995b-kube-api-access-4p95h\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.522913 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f974c01-9474-4fcd-a478-d9d56a32995b-catalog-content\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.522987 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f974c01-9474-4fcd-a478-d9d56a32995b-utilities\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.523024 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p95h\" (UniqueName: \"kubernetes.io/projected/3f974c01-9474-4fcd-a478-d9d56a32995b-kube-api-access-4p95h\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.523564 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f974c01-9474-4fcd-a478-d9d56a32995b-catalog-content\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.523678 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f974c01-9474-4fcd-a478-d9d56a32995b-utilities\") pod \"community-operators-qclkl\" (UID: 
\"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.540678 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p95h\" (UniqueName: \"kubernetes.io/projected/3f974c01-9474-4fcd-a478-d9d56a32995b-kube-api-access-4p95h\") pod \"community-operators-qclkl\" (UID: \"3f974c01-9474-4fcd-a478-d9d56a32995b\") " pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.658610 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.816348 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7fca46a-8b1a-4655-8f36-777e9779c57a" containerID="c94048801fd10e9c9f0c0568dd487ffbddce10b66a26517ef30029aa940ed962" exitCode=0 Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.816412 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95vhg" event={"ID":"c7fca46a-8b1a-4655-8f36-777e9779c57a","Type":"ContainerDied","Data":"c94048801fd10e9c9f0c0568dd487ffbddce10b66a26517ef30029aa940ed962"} Jan 31 04:30:23 crc kubenswrapper[4931]: I0131 04:30:23.816463 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95vhg" event={"ID":"c7fca46a-8b1a-4655-8f36-777e9779c57a","Type":"ContainerStarted","Data":"d1aed1fa5079ddc7bc07816e6b0f38eda924fca7b2ac3631f9324652c44b2a34"} Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.336655 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qclkl"] Jan 31 04:30:24 crc kubenswrapper[4931]: W0131 04:30:24.344583 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f974c01_9474_4fcd_a478_d9d56a32995b.slice/crio-c06d81160af6f99f2376fe3aa0ec76c95bc4c6471586807ecd409bbd333809a6 WatchSource:0}: Error finding container c06d81160af6f99f2376fe3aa0ec76c95bc4c6471586807ecd409bbd333809a6: Status 404 returned error can't find the container with id c06d81160af6f99f2376fe3aa0ec76c95bc4c6471586807ecd409bbd333809a6 Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.743991 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-htqgw"] Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.749144 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.755083 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.759983 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htqgw"] Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.824229 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9m6" event={"ID":"61b4ef96-378e-443d-9eeb-e75e6f181af6","Type":"ContainerStarted","Data":"129834e1afa089e9eb39da1fe34dce6eb3fbdb93cb04cea41d53b3c7db0a98af"} Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.828086 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qclkl" event={"ID":"3f974c01-9474-4fcd-a478-d9d56a32995b","Type":"ContainerStarted","Data":"40c424c8055bf80118e164e8682e92ad6be62336960437ec672cb3a005230dad"} Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.828165 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qclkl" event={"ID":"3f974c01-9474-4fcd-a478-d9d56a32995b","Type":"ContainerStarted","Data":"c06d81160af6f99f2376fe3aa0ec76c95bc4c6471586807ecd409bbd333809a6"} Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.843288 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3d371d-d98f-4f82-a823-b74e23f9ca19-catalog-content\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.843368 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdcd\" (UniqueName: \"kubernetes.io/projected/7a3d371d-d98f-4f82-a823-b74e23f9ca19-kube-api-access-2fdcd\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.843404 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3d371d-d98f-4f82-a823-b74e23f9ca19-utilities\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.944385 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3d371d-d98f-4f82-a823-b74e23f9ca19-catalog-content\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.944438 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdcd\" (UniqueName: \"kubernetes.io/projected/7a3d371d-d98f-4f82-a823-b74e23f9ca19-kube-api-access-2fdcd\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.944482 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3d371d-d98f-4f82-a823-b74e23f9ca19-utilities\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.945008 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3d371d-d98f-4f82-a823-b74e23f9ca19-utilities\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.945226 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3d371d-d98f-4f82-a823-b74e23f9ca19-catalog-content\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:24 crc kubenswrapper[4931]: I0131 04:30:24.972271 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdcd\" (UniqueName: \"kubernetes.io/projected/7a3d371d-d98f-4f82-a823-b74e23f9ca19-kube-api-access-2fdcd\") pod \"redhat-marketplace-htqgw\" (UID: \"7a3d371d-d98f-4f82-a823-b74e23f9ca19\") " pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.081326 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.475777 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htqgw"] Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.834392 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95vhg" event={"ID":"c7fca46a-8b1a-4655-8f36-777e9779c57a","Type":"ContainerStarted","Data":"89c7f36b9fe219a3a1177270254a897232fa20e91b525cae586414e875a3f99f"} Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.836664 4931 generic.go:334] "Generic (PLEG): container finished" podID="3f974c01-9474-4fcd-a478-d9d56a32995b" containerID="40c424c8055bf80118e164e8682e92ad6be62336960437ec672cb3a005230dad" exitCode=0 Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.836711 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qclkl" event={"ID":"3f974c01-9474-4fcd-a478-d9d56a32995b","Type":"ContainerDied","Data":"40c424c8055bf80118e164e8682e92ad6be62336960437ec672cb3a005230dad"} Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.838967 4931 generic.go:334] "Generic (PLEG): container finished" podID="7a3d371d-d98f-4f82-a823-b74e23f9ca19" containerID="0b493227574e484d69bdd2896bd56fff25e1c0df759b65b2072f540baa04cc38" exitCode=0 Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.839011 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htqgw" event={"ID":"7a3d371d-d98f-4f82-a823-b74e23f9ca19","Type":"ContainerDied","Data":"0b493227574e484d69bdd2896bd56fff25e1c0df759b65b2072f540baa04cc38"} Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.839069 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htqgw" 
event={"ID":"7a3d371d-d98f-4f82-a823-b74e23f9ca19","Type":"ContainerStarted","Data":"8c96448c043a272b9492fb2eecbfba727285f50a0ddd029bd9b2b7c09d8a08bf"} Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.840975 4931 generic.go:334] "Generic (PLEG): container finished" podID="61b4ef96-378e-443d-9eeb-e75e6f181af6" containerID="129834e1afa089e9eb39da1fe34dce6eb3fbdb93cb04cea41d53b3c7db0a98af" exitCode=0 Jan 31 04:30:25 crc kubenswrapper[4931]: I0131 04:30:25.840999 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9m6" event={"ID":"61b4ef96-378e-443d-9eeb-e75e6f181af6","Type":"ContainerDied","Data":"129834e1afa089e9eb39da1fe34dce6eb3fbdb93cb04cea41d53b3c7db0a98af"} Jan 31 04:30:26 crc kubenswrapper[4931]: E0131 04:30:26.128671 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7fca46a_8b1a_4655_8f36_777e9779c57a.slice/crio-conmon-89c7f36b9fe219a3a1177270254a897232fa20e91b525cae586414e875a3f99f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7fca46a_8b1a_4655_8f36_777e9779c57a.slice/crio-89c7f36b9fe219a3a1177270254a897232fa20e91b525cae586414e875a3f99f.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:30:26 crc kubenswrapper[4931]: I0131 04:30:26.848980 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7fca46a-8b1a-4655-8f36-777e9779c57a" containerID="89c7f36b9fe219a3a1177270254a897232fa20e91b525cae586414e875a3f99f" exitCode=0 Jan 31 04:30:26 crc kubenswrapper[4931]: I0131 04:30:26.849045 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95vhg" event={"ID":"c7fca46a-8b1a-4655-8f36-777e9779c57a","Type":"ContainerDied","Data":"89c7f36b9fe219a3a1177270254a897232fa20e91b525cae586414e875a3f99f"} Jan 31 04:30:28 crc kubenswrapper[4931]: I0131 04:30:28.868202 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9m6" event={"ID":"61b4ef96-378e-443d-9eeb-e75e6f181af6","Type":"ContainerStarted","Data":"99dd869a67be526341d5d4d6008abcb596d614fad41b40f29a870ca47e7cd11e"} Jan 31 04:30:28 crc kubenswrapper[4931]: I0131 04:30:28.891006 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fx9m6" podStartSLOduration=4.135138649 podStartE2EDuration="8.890989888s" podCreationTimestamp="2026-01-31 04:30:20 +0000 UTC" firstStartedPulling="2026-01-31 04:30:22.815858472 +0000 UTC m=+381.625087346" lastFinishedPulling="2026-01-31 04:30:27.571709711 +0000 UTC m=+386.380938585" observedRunningTime="2026-01-31 04:30:28.888246023 +0000 UTC m=+387.697474897" watchObservedRunningTime="2026-01-31 04:30:28.890989888 +0000 UTC m=+387.700218752" Jan 31 04:30:29 crc kubenswrapper[4931]: I0131 04:30:29.873916 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qclkl" event={"ID":"3f974c01-9474-4fcd-a478-d9d56a32995b","Type":"ContainerStarted","Data":"62163e19bd69be75370ad41d7498ae7e247ce132530508d6f1f815dc692d592e"} Jan 31 04:30:29 crc kubenswrapper[4931]: I0131 04:30:29.876444 4931 generic.go:334] "Generic (PLEG): container finished" podID="7a3d371d-d98f-4f82-a823-b74e23f9ca19" containerID="7d17e737f08924c0a2217e6eeb47f46b7c1d06a4a1c1cde2b249b28f65734e87" exitCode=0 Jan 31 04:30:29 crc kubenswrapper[4931]: I0131 04:30:29.876534 
4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htqgw" event={"ID":"7a3d371d-d98f-4f82-a823-b74e23f9ca19","Type":"ContainerDied","Data":"7d17e737f08924c0a2217e6eeb47f46b7c1d06a4a1c1cde2b249b28f65734e87"} Jan 31 04:30:30 crc kubenswrapper[4931]: I0131 04:30:30.883446 4931 generic.go:334] "Generic (PLEG): container finished" podID="3f974c01-9474-4fcd-a478-d9d56a32995b" containerID="62163e19bd69be75370ad41d7498ae7e247ce132530508d6f1f815dc692d592e" exitCode=0 Jan 31 04:30:30 crc kubenswrapper[4931]: I0131 04:30:30.883524 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qclkl" event={"ID":"3f974c01-9474-4fcd-a478-d9d56a32995b","Type":"ContainerDied","Data":"62163e19bd69be75370ad41d7498ae7e247ce132530508d6f1f815dc692d592e"} Jan 31 04:30:30 crc kubenswrapper[4931]: I0131 04:30:30.889647 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95vhg" event={"ID":"c7fca46a-8b1a-4655-8f36-777e9779c57a","Type":"ContainerStarted","Data":"d5373d1d6a48ef41a4077667ba12e8ddb7b4e6b23f51808afcca0bb4224f2999"} Jan 31 04:30:30 crc kubenswrapper[4931]: I0131 04:30:30.939083 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-95vhg" podStartSLOduration=3.524892443 podStartE2EDuration="8.939057288s" podCreationTimestamp="2026-01-31 04:30:22 +0000 UTC" firstStartedPulling="2026-01-31 04:30:23.936034591 +0000 UTC m=+382.745263505" lastFinishedPulling="2026-01-31 04:30:29.350199476 +0000 UTC m=+388.159428350" observedRunningTime="2026-01-31 04:30:30.938337622 +0000 UTC m=+389.747566496" watchObservedRunningTime="2026-01-31 04:30:30.939057288 +0000 UTC m=+389.748286162" Jan 31 04:30:31 crc kubenswrapper[4931]: I0131 04:30:31.267473 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:31 crc kubenswrapper[4931]: I0131 04:30:31.267729 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:32 crc kubenswrapper[4931]: I0131 04:30:32.311241 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fx9m6" podUID="61b4ef96-378e-443d-9eeb-e75e6f181af6" containerName="registry-server" probeResult="failure" output=< Jan 31 04:30:32 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 31 04:30:32 crc kubenswrapper[4931]: > Jan 31 04:30:32 crc kubenswrapper[4931]: I0131 04:30:32.710876 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:32 crc kubenswrapper[4931]: I0131 04:30:32.711111 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:32 crc kubenswrapper[4931]: I0131 04:30:32.785940 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:32 crc kubenswrapper[4931]: I0131 04:30:32.906298 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qclkl" event={"ID":"3f974c01-9474-4fcd-a478-d9d56a32995b","Type":"ContainerStarted","Data":"76afeedf78c42adf3f1bd5ef5c065ac82b9a22f74327fe4a52361bdfccce3cf0"} Jan 31 04:30:32 crc kubenswrapper[4931]: I0131 04:30:32.930860 4931 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qclkl" podStartSLOduration=3.599739731 podStartE2EDuration="9.93084404s" podCreationTimestamp="2026-01-31 04:30:23 +0000 UTC" firstStartedPulling="2026-01-31 04:30:25.83793394 +0000 UTC m=+384.647162814" lastFinishedPulling="2026-01-31 04:30:32.169038199 +0000 UTC m=+390.978267123" observedRunningTime="2026-01-31 04:30:32.928434194 +0000 UTC m=+391.737663068" watchObservedRunningTime="2026-01-31 04:30:32.93084404 +0000 UTC m=+391.740072914" Jan 31 04:30:33 crc kubenswrapper[4931]: I0131 04:30:33.659026 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:33 crc kubenswrapper[4931]: I0131 04:30:33.659403 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:33 crc kubenswrapper[4931]: I0131 04:30:33.916335 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htqgw" event={"ID":"7a3d371d-d98f-4f82-a823-b74e23f9ca19","Type":"ContainerStarted","Data":"f1778a2a64006b4774aff23749cfce2924aaa664aa6d9dbc38b283855a360165"} Jan 31 04:30:34 crc kubenswrapper[4931]: I0131 04:30:34.704468 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qclkl" podUID="3f974c01-9474-4fcd-a478-d9d56a32995b" containerName="registry-server" probeResult="failure" output=< Jan 31 04:30:34 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 31 04:30:34 crc kubenswrapper[4931]: > Jan 31 04:30:34 crc kubenswrapper[4931]: I0131 04:30:34.946604 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-htqgw" podStartSLOduration=3.535924855 podStartE2EDuration="10.946582733s" podCreationTimestamp="2026-01-31 04:30:24 +0000 UTC" firstStartedPulling="2026-01-31 04:30:25.841414368 +0000 UTC m=+384.650643242" lastFinishedPulling="2026-01-31 04:30:33.252072256 +0000 UTC m=+392.061301120" observedRunningTime="2026-01-31 04:30:34.943148814 +0000 UTC m=+393.752377698" watchObservedRunningTime="2026-01-31 04:30:34.946582733 +0000 UTC m=+393.755811607" Jan 31 04:30:35 crc kubenswrapper[4931]: I0131 04:30:35.082182 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:35 crc kubenswrapper[4931]: I0131 04:30:35.082345 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:36 crc kubenswrapper[4931]: I0131 04:30:36.128643 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-htqgw" podUID="7a3d371d-d98f-4f82-a823-b74e23f9ca19" containerName="registry-server" probeResult="failure" output=< Jan 31 04:30:36 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 31 04:30:36 crc kubenswrapper[4931]: > Jan 31 04:30:41 crc kubenswrapper[4931]: I0131 04:30:41.306585 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:41 crc kubenswrapper[4931]: I0131 04:30:41.345650 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fx9m6" Jan 31 04:30:41 crc kubenswrapper[4931]: I0131 04:30:41.421478 4931 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-d798x" Jan 31 04:30:41 crc kubenswrapper[4931]: I0131 04:30:41.506654 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w528s"] Jan 31 04:30:42 crc kubenswrapper[4931]: I0131 04:30:42.770227 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-95vhg" Jan 31 04:30:43 crc kubenswrapper[4931]: I0131 04:30:43.699224 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:43 crc kubenswrapper[4931]: I0131 04:30:43.736033 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qclkl" Jan 31 04:30:45 crc kubenswrapper[4931]: I0131 04:30:45.120820 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:45 crc kubenswrapper[4931]: I0131 04:30:45.158217 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-htqgw" Jan 31 04:30:51 crc kubenswrapper[4931]: I0131 04:30:51.133413 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:30:51 crc kubenswrapper[4931]: I0131 04:30:51.133801 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:31:06 crc kubenswrapper[4931]: I0131 04:31:06.541829 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" podUID="928c64fe-ab31-4942-9bdd-64ab8b8339aa" containerName="registry" containerID="cri-o://3c3ec3e567bbdff59dacfdb58ac9ca1cb2319c97dab9b819f72417a98c7ddde2" gracePeriod=30 Jan 31 04:31:06 crc kubenswrapper[4931]: E0131 04:31:06.637462 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod928c64fe_ab31_4942_9bdd_64ab8b8339aa.slice/crio-3c3ec3e567bbdff59dacfdb58ac9ca1cb2319c97dab9b819f72417a98c7ddde2.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:31:07 crc kubenswrapper[4931]: I0131 04:31:07.115563 4931 generic.go:334] "Generic (PLEG): container finished" podID="928c64fe-ab31-4942-9bdd-64ab8b8339aa" containerID="3c3ec3e567bbdff59dacfdb58ac9ca1cb2319c97dab9b819f72417a98c7ddde2" exitCode=0 Jan 31 04:31:07 crc kubenswrapper[4931]: I0131 04:31:07.115613 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" event={"ID":"928c64fe-ab31-4942-9bdd-64ab8b8339aa","Type":"ContainerDied","Data":"3c3ec3e567bbdff59dacfdb58ac9ca1cb2319c97dab9b819f72417a98c7ddde2"} Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.480147 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.539887 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l477s\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-kube-api-access-l477s\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.539941 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-trusted-ca\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.540204 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.540230 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/928c64fe-ab31-4942-9bdd-64ab8b8339aa-installation-pull-secrets\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.540248 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-bound-sa-token\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.540290 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-certificates\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.540343 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/928c64fe-ab31-4942-9bdd-64ab8b8339aa-ca-trust-extracted\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.540380 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-tls\") pod \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\" (UID: \"928c64fe-ab31-4942-9bdd-64ab8b8339aa\") " Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.542003 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.543073 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.549771 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928c64fe-ab31-4942-9bdd-64ab8b8339aa-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.550500 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.552111 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.559256 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-kube-api-access-l477s" (OuterVolumeSpecName: "kube-api-access-l477s") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "kube-api-access-l477s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.566718 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928c64fe-ab31-4942-9bdd-64ab8b8339aa-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.569338 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "928c64fe-ab31-4942-9bdd-64ab8b8339aa" (UID: "928c64fe-ab31-4942-9bdd-64ab8b8339aa"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.645531 4931 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/928c64fe-ab31-4942-9bdd-64ab8b8339aa-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.645573 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.645583 4931 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.645591 4931 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/928c64fe-ab31-4942-9bdd-64ab8b8339aa-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.645600 4931 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.645609 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l477s\" (UniqueName: \"kubernetes.io/projected/928c64fe-ab31-4942-9bdd-64ab8b8339aa-kube-api-access-l477s\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:07.645617 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/928c64fe-ab31-4942-9bdd-64ab8b8339aa-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:08.122146 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" event={"ID":"928c64fe-ab31-4942-9bdd-64ab8b8339aa","Type":"ContainerDied","Data":"08ca779c76d87b2ff92b6989ac213e472a51989a3e6133f4f950e3835843345d"} Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:08.122203 4931 scope.go:117] "RemoveContainer" containerID="3c3ec3e567bbdff59dacfdb58ac9ca1cb2319c97dab9b819f72417a98c7ddde2" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:08.122318 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w528s" Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:08.141286 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w528s"] Jan 31 04:31:08 crc kubenswrapper[4931]: I0131 04:31:08.146803 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w528s"] Jan 31 04:31:09 crc kubenswrapper[4931]: I0131 04:31:09.905642 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928c64fe-ab31-4942-9bdd-64ab8b8339aa" path="/var/lib/kubelet/pods/928c64fe-ab31-4942-9bdd-64ab8b8339aa/volumes" Jan 31 04:31:21 crc kubenswrapper[4931]: I0131 04:31:21.133115 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:31:21 crc kubenswrapper[4931]: I0131 04:31:21.133848 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:31:21 crc kubenswrapper[4931]: I0131 04:31:21.133927 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:31:21 crc kubenswrapper[4931]: I0131 04:31:21.134905 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bdf5eb257c3a81d0040b92141c1d8e85526b1c5ea3208409eec00a505ebefc2a"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:31:21 crc kubenswrapper[4931]: I0131 04:31:21.135023 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://bdf5eb257c3a81d0040b92141c1d8e85526b1c5ea3208409eec00a505ebefc2a" gracePeriod=600 Jan 31 04:31:22 crc kubenswrapper[4931]: I0131 04:31:22.213034 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="bdf5eb257c3a81d0040b92141c1d8e85526b1c5ea3208409eec00a505ebefc2a" exitCode=0 Jan 31 04:31:22 crc kubenswrapper[4931]: I0131 04:31:22.213285 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"bdf5eb257c3a81d0040b92141c1d8e85526b1c5ea3208409eec00a505ebefc2a"} Jan 31 04:31:22 crc kubenswrapper[4931]: I0131 04:31:22.213914 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"0e08706d8ef1e05d3d0c33d56f50d56b1d3c482c2a662e7e25c45510ebf58ae7"} Jan 31 04:31:22 crc kubenswrapper[4931]: I0131 04:31:22.213990 4931 scope.go:117] "RemoveContainer" 
containerID="a6ec4af310c8b2d7f5df01f1e0d7747260b332c5446a36c2f19f0d7373fb68e9" Jan 31 04:33:21 crc kubenswrapper[4931]: I0131 04:33:21.133407 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:33:21 crc kubenswrapper[4931]: I0131 04:33:21.134065 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:33:51 crc kubenswrapper[4931]: I0131 04:33:51.133657 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:33:51 crc kubenswrapper[4931]: I0131 04:33:51.134750 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:34:21 crc kubenswrapper[4931]: I0131 04:34:21.133388 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:34:21 crc kubenswrapper[4931]: I0131 04:34:21.134860 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:34:21 crc kubenswrapper[4931]: I0131 04:34:21.134909 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:34:21 crc kubenswrapper[4931]: I0131 04:34:21.135609 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e08706d8ef1e05d3d0c33d56f50d56b1d3c482c2a662e7e25c45510ebf58ae7"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:34:21 crc kubenswrapper[4931]: I0131 04:34:21.135666 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://0e08706d8ef1e05d3d0c33d56f50d56b1d3c482c2a662e7e25c45510ebf58ae7" gracePeriod=600 Jan 31 04:34:22 crc kubenswrapper[4931]: I0131 04:34:22.242053 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" 
containerID="0e08706d8ef1e05d3d0c33d56f50d56b1d3c482c2a662e7e25c45510ebf58ae7" exitCode=0 Jan 31 04:34:22 crc kubenswrapper[4931]: I0131 04:34:22.242119 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"0e08706d8ef1e05d3d0c33d56f50d56b1d3c482c2a662e7e25c45510ebf58ae7"} Jan 31 04:34:22 crc kubenswrapper[4931]: I0131 04:34:22.242749 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"c4a3ca51a63c7b2c90394d04ff6f72a437e5de1d31d438d34041e2aabc18bbc0"} Jan 31 04:34:22 crc kubenswrapper[4931]: I0131 04:34:22.242774 4931 scope.go:117] "RemoveContainer" containerID="bdf5eb257c3a81d0040b92141c1d8e85526b1c5ea3208409eec00a505ebefc2a" Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.753443 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78mxr"] Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.754706 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.754759 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="nbdb" containerID="cri-o://f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.754882 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="northd" containerID="cri-o://11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.754913 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="sbdb" containerID="cri-o://269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.754971 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovn-acl-logging" containerID="cri-o://d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.754945 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-node" containerID="cri-o://cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.755030 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" 
containerName="ovn-controller" containerID="cri-o://bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.813483 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" containerID="cri-o://f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" gracePeriod=30 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.887394 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/2.log" Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.888551 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/1.log" Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.888610 4931 generic.go:334] "Generic (PLEG): container finished" podID="0be95b57-6df4-4ba6-88e8-acf405e3d6d2" containerID="6240b71dda629dd359164d3a9ec2a02be6499b509a6b40f63d85f9bd579218f9" exitCode=2 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.888667 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerDied","Data":"6240b71dda629dd359164d3a9ec2a02be6499b509a6b40f63d85f9bd579218f9"} Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.888757 4931 scope.go:117] "RemoveContainer" containerID="1b075057bd7562893eaf55c33d5aa9a0b0e179e3a3f6d92400cd9e864949daf1" Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.889393 4931 scope.go:117] "RemoveContainer" containerID="6240b71dda629dd359164d3a9ec2a02be6499b509a6b40f63d85f9bd579218f9" Jan 31 04:35:52 crc kubenswrapper[4931]: E0131 04:35:52.889710 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-r5kkh_openshift-multus(0be95b57-6df4-4ba6-88e8-acf405e3d6d2)\"" pod="openshift-multus/multus-r5kkh" podUID="0be95b57-6df4-4ba6-88e8-acf405e3d6d2" Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.893293 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/3.log" Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.895489 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovn-controller/0.log" Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.896673 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" exitCode=0 Jan 31 04:35:52 crc kubenswrapper[4931]: I0131 04:35:52.896701 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.247306 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/3.log" Jan 31 04:35:53 crc 
kubenswrapper[4931]: I0131 04:35:53.249880 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovn-acl-logging/0.log" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.250434 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovn-controller/0.log" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.250904 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307299 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-45w9l"] Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307512 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c64fe-ab31-4942-9bdd-64ab8b8339aa" containerName="registry" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307523 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c64fe-ab31-4942-9bdd-64ab8b8339aa" containerName="registry" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307534 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-node" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307542 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-node" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307552 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307559 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307572 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="northd" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307580 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="northd" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307590 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307598 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307606 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kubecfg-setup" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307613 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kubecfg-setup" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307621 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovn-acl-logging" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307628 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" 
containerName="ovn-acl-logging" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307636 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307642 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307653 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="nbdb" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307659 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="nbdb" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307668 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="sbdb" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307674 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="sbdb" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307682 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovn-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307687 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovn-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307694 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307700 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.307733 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307739 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307844 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307857 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovn-acl-logging" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307864 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307871 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307877 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="sbdb" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307887 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307894 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307903 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="nbdb" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307910 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="northd" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307916 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c64fe-ab31-4942-9bdd-64ab8b8339aa" containerName="registry" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307923 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovn-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.307931 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="kube-rbac-proxy-node" Jan 31 04:35:53 crc kubenswrapper[4931]: E0131 04:35:53.308015 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.308021 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.308122 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" containerName="ovnkube-controller" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.310930 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431583 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-systemd-units\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431628 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-config\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431650 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-node-log\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431666 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8bpr\" (UniqueName: \"kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431698 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-script-lib\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431747 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431770 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-var-lib-openvswitch\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431791 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f2e5660-13d8-4896-bad5-008e165ba847-ovn-node-metrics-cert\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431807 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-env-overrides\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431824 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-log-socket\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431845 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-netns\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.431873 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-slash\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432491 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-systemd\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432507 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-kubelet\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432530 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-ovn\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432549 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-netd\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432567 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-openvswitch\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432581 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-ovn-kubernetes\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432600 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-bin\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432618 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-etc-openvswitch\") pod \"5f2e5660-13d8-4896-bad5-008e165ba847\" (UID: \"5f2e5660-13d8-4896-bad5-008e165ba847\") " Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432792 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-env-overrides\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432819 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovnkube-script-lib\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432819 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-log-socket" (OuterVolumeSpecName: "log-socket") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432837 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vp9j\" (UniqueName: \"kubernetes.io/projected/a38b4421-4b52-4029-90a7-0cf945cd2c20-kube-api-access-9vp9j\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432884 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432898 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432912 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432925 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-systemd-units\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432935 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432944 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-log-socket\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432954 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432967 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432983 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432987 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-run-netns\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.432990 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433020 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-etc-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433071 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433165 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-node-log\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433215 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-slash\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433241 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-cni-netd\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433260 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovn-node-metrics-cert\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433268 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433300 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433309 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-run-ovn-kubernetes\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433323 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-slash" (OuterVolumeSpecName: "host-slash") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433331 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovnkube-config\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433349 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-systemd\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433366 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-ovn\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433376 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433394 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-node-log" (OuterVolumeSpecName: "node-log") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433397 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-cni-bin\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433441 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-kubelet\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433450 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433520 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-var-lib-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433544 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433629 4931 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433645 4931 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433655 4931 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433665 4931 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433673 4931 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433682 4931 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433690 4931 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433697 4931 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433706 4931 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433714 4931 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433738 4931 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433746 4931 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433756 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-config\") on 
node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433764 4931 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.433799 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.434088 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.438711 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2e5660-13d8-4896-bad5-008e165ba847-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.439356 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr" (OuterVolumeSpecName: "kube-api-access-q8bpr") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "kube-api-access-q8bpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.457199 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5f2e5660-13d8-4896-bad5-008e165ba847" (UID: "5f2e5660-13d8-4896-bad5-008e165ba847"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535139 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535211 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-run-netns\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535263 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-etc-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535325 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-node-log\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535376 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-slash\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535414 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-cni-netd\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535450 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovn-node-metrics-cert\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535506 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-run-ovn-kubernetes\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535562 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovnkube-config\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc 
kubenswrapper[4931]: I0131 04:35:53.535605 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-systemd\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535612 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-node-log\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535637 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-ovn\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535702 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-ovn\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535758 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-cni-bin\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535708 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-cni-bin\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535804 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535839 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-run-netns\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535869 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-etc-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535874 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-kubelet\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535926 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-var-lib-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.535967 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-env-overrides\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536012 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovnkube-script-lib\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536047 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vp9j\" (UniqueName: \"kubernetes.io/projected/a38b4421-4b52-4029-90a7-0cf945cd2c20-kube-api-access-9vp9j\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536097 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536144 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-systemd-units\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536176 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-log-socket\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536247 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8bpr\" (UniqueName: \"kubernetes.io/projected/5f2e5660-13d8-4896-bad5-008e165ba847-kube-api-access-q8bpr\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536284 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc 
kubenswrapper[4931]: I0131 04:35:53.536320 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f2e5660-13d8-4896-bad5-008e165ba847-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536340 4931 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5f2e5660-13d8-4896-bad5-008e165ba847-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536365 4931 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536384 4931 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f2e5660-13d8-4896-bad5-008e165ba847-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536436 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-log-socket\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536630 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536634 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-kubelet\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536692 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-slash\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536729 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-cni-netd\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536753 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-systemd-units\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536777 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-var-lib-openvswitch\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.536940 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-host-run-ovn-kubernetes\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.537047 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a38b4421-4b52-4029-90a7-0cf945cd2c20-run-systemd\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.537237 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-env-overrides\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.537507 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovnkube-script-lib\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.538075 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovnkube-config\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.541895 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a38b4421-4b52-4029-90a7-0cf945cd2c20-ovn-node-metrics-cert\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.556304 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vp9j\" (UniqueName: \"kubernetes.io/projected/a38b4421-4b52-4029-90a7-0cf945cd2c20-kube-api-access-9vp9j\") pod \"ovnkube-node-45w9l\" (UID: \"a38b4421-4b52-4029-90a7-0cf945cd2c20\") " pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.624943 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.903302 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/2.log" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.905577 4931 generic.go:334] "Generic (PLEG): container finished" podID="a38b4421-4b52-4029-90a7-0cf945cd2c20" containerID="cabd91ed3a308406ef4198794d8475523571457ca50daeb1326e7e81d092fc77" exitCode=0 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.905626 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerDied","Data":"cabd91ed3a308406ef4198794d8475523571457ca50daeb1326e7e81d092fc77"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.905677 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"de7b1d1c996bb03af15c368a70b62ca979b692861c24525919c35e33be14447f"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.908683 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovnkube-controller/3.log" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.911351 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovn-acl-logging/0.log" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.911855 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78mxr_5f2e5660-13d8-4896-bad5-008e165ba847/ovn-controller/0.log" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912285 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" exitCode=0 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912303 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" exitCode=0 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912312 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" exitCode=0 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912318 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" exitCode=0 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912324 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" exitCode=0 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912332 4931 generic.go:334] "Generic (PLEG): container finished" podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" exitCode=143 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912338 4931 generic.go:334] "Generic (PLEG): container finished" 
podID="5f2e5660-13d8-4896-bad5-008e165ba847" containerID="bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" exitCode=143 Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912359 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912387 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912401 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912411 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912414 4931 scope.go:117] "RemoveContainer" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912419 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912509 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912529 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912543 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912550 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912559 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912567 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912573 4931 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912578 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912583 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912588 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912595 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912602 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912608 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912614 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912619 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912624 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912629 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912634 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912639 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912643 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912648 4931 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912655 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" event={"ID":"5f2e5660-13d8-4896-bad5-008e165ba847","Type":"ContainerDied","Data":"3fc2c69ac5190965e31f4add0aa0f4113fd124e32bb9986f4eb580cd4aa176e8"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912663 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912669 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912675 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912680 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912685 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912690 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912694 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912700 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912704 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912709 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.912396 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.970337 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:35:53 crc kubenswrapper[4931]: I0131 04:35:53.992774 4931 scope.go:117] "RemoveContainer" containerID="269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.022599 4931 scope.go:117] "RemoveContainer" containerID="f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.034460 4931 scope.go:117] "RemoveContainer" containerID="11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.049782 4931 scope.go:117] "RemoveContainer" containerID="907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.066677 4931 scope.go:117] "RemoveContainer" containerID="cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.098565 4931 scope.go:117] "RemoveContainer" containerID="d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.119278 4931 scope.go:117] "RemoveContainer" containerID="bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.146987 4931 scope.go:117] "RemoveContainer" containerID="d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.164604 4931 scope.go:117] "RemoveContainer" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.165145 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": container with ID starting with f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060 not found: ID does not exist" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.165199 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} err="failed to get container status \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": rpc error: code = NotFound desc = could not find container \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": container with ID starting with f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.165229 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.165500 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": container with ID starting with f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803 not found: ID does not exist" 
containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.165537 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} err="failed to get container status \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": rpc error: code = NotFound desc = could not find container \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": container with ID starting with f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.165567 4931 scope.go:117] "RemoveContainer" containerID="269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.165957 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": container with ID starting with 269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448 not found: ID does not exist" containerID="269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.165979 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} err="failed to get container status \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": rpc error: code = NotFound desc = could not find container \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": container with ID starting with 269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.165994 4931 scope.go:117] "RemoveContainer" containerID="f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.166295 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": container with ID starting with f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292 not found: ID does not exist" containerID="f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.166325 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} err="failed to get container status \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": rpc error: code = NotFound desc = could not find container \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": container with ID starting with f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.166343 4931 scope.go:117] "RemoveContainer" containerID="11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.166623 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": container with ID starting with 11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5 not found: ID does not exist" containerID="11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.166656 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} err="failed to get container status \"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": rpc error: code = NotFound desc = could not find container \"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": container with ID starting with 11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.166687 4931 scope.go:117] "RemoveContainer" containerID="907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.166985 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": container with ID starting with 907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb not found: ID does not exist" containerID="907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167003 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} err="failed to get container status \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": rpc error: code = NotFound desc = could not find container \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": container with ID starting with 907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167018 4931 scope.go:117] "RemoveContainer" containerID="cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.167250 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": container with ID starting with cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9 not found: ID does not exist" containerID="cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167276 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} err="failed to get container status \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": rpc error: code = NotFound desc = could not find container \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": container with ID starting with cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167294 4931 scope.go:117] "RemoveContainer" containerID="d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" Jan 31 04:35:54 crc 
kubenswrapper[4931]: E0131 04:35:54.167548 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": container with ID starting with d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe not found: ID does not exist" containerID="d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167571 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} err="failed to get container status \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": rpc error: code = NotFound desc = could not find container \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": container with ID starting with d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167584 4931 scope.go:117] "RemoveContainer" containerID="bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.167825 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": container with ID starting with bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d not found: ID does not exist" containerID="bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167847 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} err="failed to get container status \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": rpc error: code = NotFound desc = could not find container \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": container with ID starting with bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.167861 4931 scope.go:117] "RemoveContainer" containerID="d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8" Jan 31 04:35:54 crc kubenswrapper[4931]: E0131 04:35:54.168115 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": container with ID starting with d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8 not found: ID does not exist" containerID="d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.168135 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} err="failed to get container status \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": rpc error: code = NotFound desc = could not find container \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": container with ID starting with d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: 
I0131 04:35:54.168149 4931 scope.go:117] "RemoveContainer" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.168374 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} err="failed to get container status \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": rpc error: code = NotFound desc = could not find container \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": container with ID starting with f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.168404 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.168749 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} err="failed to get container status \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": rpc error: code = NotFound desc = could not find container \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": container with ID starting with f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.168802 4931 scope.go:117] "RemoveContainer" containerID="269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.169098 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} err="failed to get container status \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": rpc error: code = NotFound desc = could not find container \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": container with ID starting with 269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.169126 4931 scope.go:117] "RemoveContainer" containerID="f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.169446 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} err="failed to get container status \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": rpc error: code = NotFound desc = could not find container \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": container with ID starting with f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.169479 4931 scope.go:117] "RemoveContainer" containerID="11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.169758 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} err="failed to get container status 
\"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": rpc error: code = NotFound desc = could not find container \"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": container with ID starting with 11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.169790 4931 scope.go:117] "RemoveContainer" containerID="907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.170096 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} err="failed to get container status \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": rpc error: code = NotFound desc = could not find container \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": container with ID starting with 907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.170127 4931 scope.go:117] "RemoveContainer" containerID="cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.170514 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} err="failed to get container status \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": rpc error: code = NotFound desc = could not find container \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": container with ID starting with cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.170541 4931 scope.go:117] "RemoveContainer" containerID="d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.170818 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} err="failed to get container status \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": rpc error: code = NotFound desc = could not find container \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": container with ID starting with d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.170847 4931 scope.go:117] "RemoveContainer" containerID="bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.171086 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} err="failed to get container status \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": rpc error: code = NotFound desc = could not find container \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": container with ID starting with bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.171111 4931 scope.go:117] "RemoveContainer" 
containerID="d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.171374 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} err="failed to get container status \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": rpc error: code = NotFound desc = could not find container \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": container with ID starting with d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.171404 4931 scope.go:117] "RemoveContainer" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.171623 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} err="failed to get container status \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": rpc error: code = NotFound desc = could not find container \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": container with ID starting with f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.171650 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172062 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} err="failed to get container status \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": rpc error: code = NotFound desc = could not find container \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": container with ID starting with f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172093 4931 scope.go:117] "RemoveContainer" containerID="269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172354 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} err="failed to get container status \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": rpc error: code = NotFound desc = could not find container \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": container with ID starting with 269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172378 4931 scope.go:117] "RemoveContainer" containerID="f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172582 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} err="failed to get container status \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": rpc error: code = NotFound desc = could not find 
container \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": container with ID starting with f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172616 4931 scope.go:117] "RemoveContainer" containerID="11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172840 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} err="failed to get container status \"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": rpc error: code = NotFound desc = could not find container \"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": container with ID starting with 11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.172866 4931 scope.go:117] "RemoveContainer" containerID="907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173091 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} err="failed to get container status \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": rpc error: code = NotFound desc = could not find container \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": container with ID starting with 907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173117 4931 scope.go:117] "RemoveContainer" containerID="cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173348 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} err="failed to get container status \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": rpc error: code = NotFound desc = could not find container \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": container with ID starting with cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173392 4931 scope.go:117] "RemoveContainer" containerID="d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173611 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} err="failed to get container status \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": rpc error: code = NotFound desc = could not find container \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": container with ID starting with d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173634 4931 scope.go:117] "RemoveContainer" containerID="bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173951 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} err="failed to get container status \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": rpc error: code = NotFound desc = could not find container \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": container with ID starting with bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.173992 4931 scope.go:117] "RemoveContainer" containerID="d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.174298 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} err="failed to get container status \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": rpc error: code = NotFound desc = could not find container \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": container with ID starting with d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.174320 4931 scope.go:117] "RemoveContainer" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.174591 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} err="failed to get container status \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": rpc error: code = NotFound desc = could not find container \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": container with ID starting with f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.174617 4931 scope.go:117] "RemoveContainer" containerID="f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.174957 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803"} err="failed to get container status \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": rpc error: code = NotFound desc = could not find container \"f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803\": container with ID starting with f2e78bdab6f40dd508efefb20c7c57b3d79d12a036bed2c652493b5534883803 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.174984 4931 scope.go:117] "RemoveContainer" containerID="269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175193 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448"} err="failed to get container status \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": rpc error: code = NotFound desc = could not find container \"269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448\": container with ID starting with 
269eac64f5f51331670bdbb5e1ca7ac7bd7cff17d59e4802d6b20ff380efd448 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175210 4931 scope.go:117] "RemoveContainer" containerID="f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175401 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292"} err="failed to get container status \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": rpc error: code = NotFound desc = could not find container \"f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292\": container with ID starting with f16885ebbd453f17bee45b17a0c152734bc77e95472fc2c0cb6cbed046df8292 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175421 4931 scope.go:117] "RemoveContainer" containerID="11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175625 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5"} err="failed to get container status \"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": rpc error: code = NotFound desc = could not find container \"11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5\": container with ID starting with 11a1e3d17bfa129dc918ddb4c56a81de5d11510a1e0c20ea777c583a1f4fbee5 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175658 4931 scope.go:117] "RemoveContainer" containerID="907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175937 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb"} err="failed to get container status \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": rpc error: code = NotFound desc = could not find container \"907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb\": container with ID starting with 907b665114d8083cff6dbcee0f5a1291a8bbeacbe126b62d2654a290d15baaeb not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.175959 4931 scope.go:117] "RemoveContainer" containerID="cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176163 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9"} err="failed to get container status \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": rpc error: code = NotFound desc = could not find container \"cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9\": container with ID starting with cc8ca980904139c4984490d822efe231d42878f1d896ccaf83bd6c8e5857d0b9 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176186 4931 scope.go:117] "RemoveContainer" containerID="d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176355 4931 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe"} err="failed to get container status \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": rpc error: code = NotFound desc = could not find container \"d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe\": container with ID starting with d17d378c4819d34bd16efd2328601389b100ce2c6ddfdb83dd01c7017d4cbcfe not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176373 4931 scope.go:117] "RemoveContainer" containerID="bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176527 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d"} err="failed to get container status \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": rpc error: code = NotFound desc = could not find container \"bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d\": container with ID starting with bfa93dbf638f22cd0f67b589b61380903ed1ac1ed9348a9cc8ca5dd2a411fa9d not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176542 4931 scope.go:117] "RemoveContainer" containerID="d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176807 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8"} err="failed to get container status \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": rpc error: code = NotFound desc = could not find container \"d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8\": container with ID starting with d0be1a52d68ad6926ab02d8d511d5f3ca1b852af1c689476d9ae29f7f36973a8 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.176823 4931 scope.go:117] "RemoveContainer" containerID="f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.177028 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060"} err="failed to get container status \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": rpc error: code = NotFound desc = could not find container \"f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060\": container with ID starting with f38204520344b52d6e8c3a88b8216e1fdb7e55a7fc59d60f06d6f3e4d4b51060 not found: ID does not exist" Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.920204 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"70781a9201dfc494754fee545e3452af43affbd553bbc2fc248a6290f9553704"} Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.920572 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"edb7a6e9274e31a01f767c6b09801d45ca6687dc77dfd1704c584bb5218183bf"} Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.920586 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"17642db6adbb346f6910e39f1a10c9e528bf3ed806f38bada47f72a5efd8dbf7"} Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.920596 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"b9a0871b2a13a644d931561d6a276435431bc6607cd5b51129472d3fe5e8340c"} Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.920606 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"ce05154e3cbe2236254f986d565e237fcc86b8ff5a65d49d7847f958eb04125c"} Jan 31 04:35:54 crc kubenswrapper[4931]: I0131 04:35:54.920615 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"9379c955ec7a8cf00f739e72c5775f7da72b62afd8a6125816d543e19279a24e"} Jan 31 04:35:56 crc kubenswrapper[4931]: I0131 04:35:56.937869 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"fdae3645629638d2a0fdf1f32cf367076c8c51fcbe69d293c908dc9855bc2b0f"} Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.408976 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769"] Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.410554 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.413964 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.512970 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrqrd\" (UniqueName: \"kubernetes.io/projected/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-kube-api-access-rrqrd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.513155 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.513277 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.615171 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.615279 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrqrd\" (UniqueName: \"kubernetes.io/projected/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-kube-api-access-rrqrd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.615342 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.616214 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.616220 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.650956 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrqrd\" (UniqueName: \"kubernetes.io/projected/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-kube-api-access-rrqrd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.724650 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: E0131 04:35:59.759393 4931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(7312b50891a633d456f5fdacbe451b4c825119d7bcb0253d6f955a3dc0bbe7c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:35:59 crc kubenswrapper[4931]: E0131 04:35:59.759504 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(7312b50891a633d456f5fdacbe451b4c825119d7bcb0253d6f955a3dc0bbe7c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: E0131 04:35:59.759563 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(7312b50891a633d456f5fdacbe451b4c825119d7bcb0253d6f955a3dc0bbe7c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:35:59 crc kubenswrapper[4931]: E0131 04:35:59.759645 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace(3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace(3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(7312b50891a633d456f5fdacbe451b4c825119d7bcb0253d6f955a3dc0bbe7c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.974464 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" event={"ID":"a38b4421-4b52-4029-90a7-0cf945cd2c20","Type":"ContainerStarted","Data":"40ea61a76f8e9d25be156ee23b9bafbdbde45c616746602c55d42e120bd59ecd"} Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.975021 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.975049 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:35:59 crc kubenswrapper[4931]: I0131 04:35:59.975059 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:36:00 crc kubenswrapper[4931]: I0131 04:36:00.008194 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769"] Jan 31 04:36:00 crc kubenswrapper[4931]: I0131 04:36:00.008472 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:00 crc kubenswrapper[4931]: I0131 04:36:00.009190 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:00 crc kubenswrapper[4931]: I0131 04:36:00.035851 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:36:00 crc kubenswrapper[4931]: I0131 04:36:00.036782 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" podStartSLOduration=7.036754381 podStartE2EDuration="7.036754381s" podCreationTimestamp="2026-01-31 04:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:36:00.009992615 +0000 UTC m=+718.819221489" watchObservedRunningTime="2026-01-31 04:36:00.036754381 +0000 UTC m=+718.845983255" Jan 31 04:36:00 crc kubenswrapper[4931]: I0131 04:36:00.037414 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:36:00 crc kubenswrapper[4931]: E0131 04:36:00.046878 4931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(ce0001514748b6d90e94dce9ea37e2f92238d77747464d1da250147ce9d0349c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:36:00 crc kubenswrapper[4931]: E0131 04:36:00.047053 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(ce0001514748b6d90e94dce9ea37e2f92238d77747464d1da250147ce9d0349c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:00 crc kubenswrapper[4931]: E0131 04:36:00.047095 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(ce0001514748b6d90e94dce9ea37e2f92238d77747464d1da250147ce9d0349c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:00 crc kubenswrapper[4931]: E0131 04:36:00.047165 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace(3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace(3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(ce0001514748b6d90e94dce9ea37e2f92238d77747464d1da250147ce9d0349c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" Jan 31 04:36:06 crc kubenswrapper[4931]: I0131 04:36:06.896563 4931 scope.go:117] "RemoveContainer" containerID="6240b71dda629dd359164d3a9ec2a02be6499b509a6b40f63d85f9bd579218f9" Jan 31 04:36:06 crc kubenswrapper[4931]: E0131 04:36:06.897353 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-r5kkh_openshift-multus(0be95b57-6df4-4ba6-88e8-acf405e3d6d2)\"" pod="openshift-multus/multus-r5kkh" podUID="0be95b57-6df4-4ba6-88e8-acf405e3d6d2" Jan 31 04:36:13 crc kubenswrapper[4931]: I0131 04:36:13.897152 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:13 crc kubenswrapper[4931]: I0131 04:36:13.901153 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:13 crc kubenswrapper[4931]: E0131 04:36:13.934584 4931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(8be80382369f7e8cff571dc77c10e4d10ecfc700a0c193c227cbb66271be16fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:36:13 crc kubenswrapper[4931]: E0131 04:36:13.934747 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(8be80382369f7e8cff571dc77c10e4d10ecfc700a0c193c227cbb66271be16fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:13 crc kubenswrapper[4931]: E0131 04:36:13.934781 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(8be80382369f7e8cff571dc77c10e4d10ecfc700a0c193c227cbb66271be16fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:13 crc kubenswrapper[4931]: E0131 04:36:13.934860 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace(3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace(3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_openshift-marketplace_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce_0(8be80382369f7e8cff571dc77c10e4d10ecfc700a0c193c227cbb66271be16fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" Jan 31 04:36:21 crc kubenswrapper[4931]: I0131 04:36:21.133854 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:36:21 crc kubenswrapper[4931]: I0131 04:36:21.134708 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:36:21 crc kubenswrapper[4931]: I0131 04:36:21.901145 4931 scope.go:117] "RemoveContainer" containerID="6240b71dda629dd359164d3a9ec2a02be6499b509a6b40f63d85f9bd579218f9" Jan 31 04:36:22 crc kubenswrapper[4931]: I0131 04:36:22.124494 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r5kkh_0be95b57-6df4-4ba6-88e8-acf405e3d6d2/kube-multus/2.log" Jan 31 04:36:22 crc kubenswrapper[4931]: I0131 04:36:22.124545 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r5kkh" event={"ID":"0be95b57-6df4-4ba6-88e8-acf405e3d6d2","Type":"ContainerStarted","Data":"df27614696b4a62c54ed691c5c9f62857725859252ec2bdcc0df0e6984f113c1"} Jan 31 04:36:23 crc kubenswrapper[4931]: I0131 04:36:23.652398 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45w9l" Jan 31 04:36:23 crc kubenswrapper[4931]: I0131 04:36:23.955398 4931 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod5f2e5660-13d8-4896-bad5-008e165ba847"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod5f2e5660-13d8-4896-bad5-008e165ba847] : Timed out while waiting for systemd to remove kubepods-burstable-pod5f2e5660_13d8_4896_bad5_008e165ba847.slice" Jan 31 04:36:23 crc kubenswrapper[4931]: E0131 04:36:23.955488 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod5f2e5660-13d8-4896-bad5-008e165ba847] : unable to destroy cgroup paths for cgroup [kubepods burstable pod5f2e5660-13d8-4896-bad5-008e165ba847] : Timed out while waiting for systemd to remove 
kubepods-burstable-pod5f2e5660_13d8_4896_bad5_008e165ba847.slice" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" Jan 31 04:36:24 crc kubenswrapper[4931]: I0131 04:36:24.138272 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78mxr" Jan 31 04:36:24 crc kubenswrapper[4931]: I0131 04:36:24.169023 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78mxr"] Jan 31 04:36:24 crc kubenswrapper[4931]: I0131 04:36:24.175195 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78mxr"] Jan 31 04:36:24 crc kubenswrapper[4931]: I0131 04:36:24.896437 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:24 crc kubenswrapper[4931]: I0131 04:36:24.897286 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:25 crc kubenswrapper[4931]: I0131 04:36:25.221618 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769"] Jan 31 04:36:25 crc kubenswrapper[4931]: W0131 04:36:25.225890 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b1478e9_7a8e_4195_ba7c_f6f0f8cbbfce.slice/crio-7010c1cbb8fbd0ca6b8fff4af2c57f8d3826f178129b5773cc9d35dc02371146 WatchSource:0}: Error finding container 7010c1cbb8fbd0ca6b8fff4af2c57f8d3826f178129b5773cc9d35dc02371146: Status 404 returned error can't find the container with id 7010c1cbb8fbd0ca6b8fff4af2c57f8d3826f178129b5773cc9d35dc02371146 Jan 31 04:36:25 crc kubenswrapper[4931]: I0131 04:36:25.905597 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2e5660-13d8-4896-bad5-008e165ba847" path="/var/lib/kubelet/pods/5f2e5660-13d8-4896-bad5-008e165ba847/volumes" Jan 31 04:36:26 crc kubenswrapper[4931]: I0131 04:36:26.153139 4931 generic.go:334] "Generic (PLEG): container finished" podID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerID="4208b7c4f7cae2030e6b83fb9a607c28cb8e1859c57b8bde94bf8f5dedd53583" exitCode=0 Jan 31 04:36:26 crc kubenswrapper[4931]: I0131 04:36:26.153188 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" event={"ID":"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce","Type":"ContainerDied","Data":"4208b7c4f7cae2030e6b83fb9a607c28cb8e1859c57b8bde94bf8f5dedd53583"} Jan 31 04:36:26 crc kubenswrapper[4931]: I0131 04:36:26.153216 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" event={"ID":"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce","Type":"ContainerStarted","Data":"7010c1cbb8fbd0ca6b8fff4af2c57f8d3826f178129b5773cc9d35dc02371146"} Jan 31 04:36:26 crc kubenswrapper[4931]: I0131 04:36:26.154902 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:36:28 crc kubenswrapper[4931]: I0131 04:36:28.170921 4931 generic.go:334] "Generic (PLEG): container finished" podID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerID="0be54ed43d2ed97fe783faa39f4dc119897e2c733ed71b91e708a8a72d250e0a" exitCode=0 Jan 31 
04:36:28 crc kubenswrapper[4931]: I0131 04:36:28.171065 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" event={"ID":"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce","Type":"ContainerDied","Data":"0be54ed43d2ed97fe783faa39f4dc119897e2c733ed71b91e708a8a72d250e0a"} Jan 31 04:36:29 crc kubenswrapper[4931]: I0131 04:36:29.177475 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" event={"ID":"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce","Type":"ContainerStarted","Data":"aa05e6b03308824845180964d856bdeeff73d5e29374055f9b00b0f7ec8174ba"} Jan 31 04:36:30 crc kubenswrapper[4931]: I0131 04:36:30.187080 4931 generic.go:334] "Generic (PLEG): container finished" podID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerID="aa05e6b03308824845180964d856bdeeff73d5e29374055f9b00b0f7ec8174ba" exitCode=0 Jan 31 04:36:30 crc kubenswrapper[4931]: I0131 04:36:30.187241 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" event={"ID":"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce","Type":"ContainerDied","Data":"aa05e6b03308824845180964d856bdeeff73d5e29374055f9b00b0f7ec8174ba"} Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.445541 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.569373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrqrd\" (UniqueName: \"kubernetes.io/projected/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-kube-api-access-rrqrd\") pod \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.569475 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-bundle\") pod \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.569525 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-util\") pod \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\" (UID: \"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce\") " Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.570922 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-bundle" (OuterVolumeSpecName: "bundle") pod "3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" (UID: "3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.574699 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-kube-api-access-rrqrd" (OuterVolumeSpecName: "kube-api-access-rrqrd") pod "3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" (UID: "3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce"). InnerVolumeSpecName "kube-api-access-rrqrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.580181 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-util" (OuterVolumeSpecName: "util") pod "3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" (UID: "3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.671417 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrqrd\" (UniqueName: \"kubernetes.io/projected/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-kube-api-access-rrqrd\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.671494 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:31 crc kubenswrapper[4931]: I0131 04:36:31.671508 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:36:32 crc kubenswrapper[4931]: I0131 04:36:32.198986 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" event={"ID":"3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce","Type":"ContainerDied","Data":"7010c1cbb8fbd0ca6b8fff4af2c57f8d3826f178129b5773cc9d35dc02371146"} Jan 31 04:36:32 crc kubenswrapper[4931]: I0131 04:36:32.199025 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7010c1cbb8fbd0ca6b8fff4af2c57f8d3826f178129b5773cc9d35dc02371146" Jan 31 04:36:32 crc kubenswrapper[4931]: I0131 04:36:32.199026 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769" Jan 31 04:36:35 crc kubenswrapper[4931]: I0131 04:36:35.742237 4931 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.791927 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57fb747bf-qc864"] Jan 31 04:36:42 crc kubenswrapper[4931]: E0131 04:36:42.792398 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerName="extract" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.792412 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerName="extract" Jan 31 04:36:42 crc kubenswrapper[4931]: E0131 04:36:42.792424 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerName="util" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.792431 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerName="util" Jan 31 04:36:42 crc kubenswrapper[4931]: E0131 04:36:42.792455 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerName="pull" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.792463 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerName="pull" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.792565 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce" containerName="extract" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.792979 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.795035 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.795075 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.795074 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pnt9d" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.795662 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.796015 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.806661 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57fb747bf-qc864"] Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.908259 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd4db3c6-d696-4339-b0a4-9283164a27f8-webhook-cert\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.908314 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwwsl\" (UniqueName: \"kubernetes.io/projected/bd4db3c6-d696-4339-b0a4-9283164a27f8-kube-api-access-vwwsl\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:42 crc kubenswrapper[4931]: I0131 04:36:42.908527 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd4db3c6-d696-4339-b0a4-9283164a27f8-apiservice-cert\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.009514 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwwsl\" (UniqueName: \"kubernetes.io/projected/bd4db3c6-d696-4339-b0a4-9283164a27f8-kube-api-access-vwwsl\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.009639 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd4db3c6-d696-4339-b0a4-9283164a27f8-apiservice-cert\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.009704 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd4db3c6-d696-4339-b0a4-9283164a27f8-webhook-cert\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.020826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd4db3c6-d696-4339-b0a4-9283164a27f8-apiservice-cert\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.021947 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd4db3c6-d696-4339-b0a4-9283164a27f8-webhook-cert\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.024828 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94"] Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.025488 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.029962 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.029962 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.030203 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-zgxkz" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.041785 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwwsl\" (UniqueName: \"kubernetes.io/projected/bd4db3c6-d696-4339-b0a4-9283164a27f8-kube-api-access-vwwsl\") pod \"metallb-operator-controller-manager-57fb747bf-qc864\" (UID: \"bd4db3c6-d696-4339-b0a4-9283164a27f8\") " pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.049258 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94"] Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.114627 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.216796 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dk6\" (UniqueName: \"kubernetes.io/projected/fb52765c-4d03-4b55-84e5-e561f54a06bd-kube-api-access-27dk6\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.217151 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb52765c-4d03-4b55-84e5-e561f54a06bd-webhook-cert\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.217191 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb52765c-4d03-4b55-84e5-e561f54a06bd-apiservice-cert\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.300658 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57fb747bf-qc864"] Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.318906 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb52765c-4d03-4b55-84e5-e561f54a06bd-apiservice-cert\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.318990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27dk6\" (UniqueName: \"kubernetes.io/projected/fb52765c-4d03-4b55-84e5-e561f54a06bd-kube-api-access-27dk6\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.319029 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb52765c-4d03-4b55-84e5-e561f54a06bd-webhook-cert\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.325739 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb52765c-4d03-4b55-84e5-e561f54a06bd-webhook-cert\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.326062 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/fb52765c-4d03-4b55-84e5-e561f54a06bd-apiservice-cert\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.341169 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27dk6\" (UniqueName: \"kubernetes.io/projected/fb52765c-4d03-4b55-84e5-e561f54a06bd-kube-api-access-27dk6\") pod \"metallb-operator-webhook-server-87cfd9976-vvr94\" (UID: \"fb52765c-4d03-4b55-84e5-e561f54a06bd\") " pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.420083 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:43 crc kubenswrapper[4931]: I0131 04:36:43.710178 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94"] Jan 31 04:36:43 crc kubenswrapper[4931]: W0131 04:36:43.714433 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb52765c_4d03_4b55_84e5_e561f54a06bd.slice/crio-84c0ab81be7b83712900f9094df7486e2373b766d3b0763c30b8e50807db49fb WatchSource:0}: Error finding container 84c0ab81be7b83712900f9094df7486e2373b766d3b0763c30b8e50807db49fb: Status 404 returned error can't find the container with id 84c0ab81be7b83712900f9094df7486e2373b766d3b0763c30b8e50807db49fb Jan 31 04:36:44 crc kubenswrapper[4931]: I0131 04:36:44.260083 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" event={"ID":"fb52765c-4d03-4b55-84e5-e561f54a06bd","Type":"ContainerStarted","Data":"84c0ab81be7b83712900f9094df7486e2373b766d3b0763c30b8e50807db49fb"} Jan 31 04:36:44 crc kubenswrapper[4931]: I0131 04:36:44.261070 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" event={"ID":"bd4db3c6-d696-4339-b0a4-9283164a27f8","Type":"ContainerStarted","Data":"16f2d8792fc5577c9978c3aa9000f33b90363c811419322d9626e4f3488b8461"} Jan 31 04:36:48 crc kubenswrapper[4931]: I0131 04:36:48.286108 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" event={"ID":"bd4db3c6-d696-4339-b0a4-9283164a27f8","Type":"ContainerStarted","Data":"e36ada89b52b2d498f83d1e604c8ff549e152a69da42b031efb81b20c739c147"} Jan 31 04:36:48 crc kubenswrapper[4931]: I0131 04:36:48.286681 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:36:48 crc kubenswrapper[4931]: I0131 04:36:48.288006 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" event={"ID":"fb52765c-4d03-4b55-84e5-e561f54a06bd","Type":"ContainerStarted","Data":"cb63ba190cd07cd5fd9a0618cc24adcec031fd0ef17506f28c010fed6e7b20a4"} Jan 31 04:36:48 crc kubenswrapper[4931]: I0131 04:36:48.288171 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:36:48 crc kubenswrapper[4931]: I0131 04:36:48.314900 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" podStartSLOduration=1.950220243 podStartE2EDuration="6.31487921s" podCreationTimestamp="2026-01-31 04:36:42 +0000 UTC" firstStartedPulling="2026-01-31 04:36:43.31246625 +0000 UTC m=+762.121695124" lastFinishedPulling="2026-01-31 04:36:47.677125217 +0000 UTC m=+766.486354091" observedRunningTime="2026-01-31 04:36:48.312918396 +0000 UTC m=+767.122147280" watchObservedRunningTime="2026-01-31 04:36:48.31487921 +0000 UTC m=+767.124108084" Jan 31 04:36:48 crc kubenswrapper[4931]: I0131 04:36:48.339091 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" podStartSLOduration=1.3152748650000001 podStartE2EDuration="5.339064305s" podCreationTimestamp="2026-01-31 04:36:43 +0000 UTC" firstStartedPulling="2026-01-31 04:36:43.717312647 +0000 UTC m=+762.526541521" lastFinishedPulling="2026-01-31 04:36:47.741102087 +0000 UTC m=+766.550330961" observedRunningTime="2026-01-31 04:36:48.334991323 +0000 UTC m=+767.144220197" watchObservedRunningTime="2026-01-31 04:36:48.339064305 +0000 UTC m=+767.148293179" Jan 31 04:36:51 crc kubenswrapper[4931]: I0131 04:36:51.133223 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:36:51 crc kubenswrapper[4931]: I0131 04:36:51.133701 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:37:03 crc kubenswrapper[4931]: I0131 04:37:03.424317 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-87cfd9976-vvr94" Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.133176 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.134259 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.134351 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.135590 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4a3ca51a63c7b2c90394d04ff6f72a437e5de1d31d438d34041e2aabc18bbc0"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 
04:37:21.135710 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://c4a3ca51a63c7b2c90394d04ff6f72a437e5de1d31d438d34041e2aabc18bbc0" gracePeriod=600 Jan 31 04:37:21 crc kubenswrapper[4931]: E0131 04:37:21.272143 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7d60e8b_e113_470f_93ff_a8a795074642.slice/crio-c4a3ca51a63c7b2c90394d04ff6f72a437e5de1d31d438d34041e2aabc18bbc0.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.496530 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="c4a3ca51a63c7b2c90394d04ff6f72a437e5de1d31d438d34041e2aabc18bbc0" exitCode=0 Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.496583 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"c4a3ca51a63c7b2c90394d04ff6f72a437e5de1d31d438d34041e2aabc18bbc0"} Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.496624 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"1ceca043a424e437ecc5528abaae063cab9d2263bd81f19cc01aa29b5f8717c7"} Jan 31 04:37:21 crc kubenswrapper[4931]: I0131 04:37:21.496646 4931 scope.go:117] "RemoveContainer" containerID="0e08706d8ef1e05d3d0c33d56f50d56b1d3c482c2a662e7e25c45510ebf58ae7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.117124 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57fb747bf-qc864" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.808134 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cftp7"] Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.810549 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.816168 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.816234 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-98xzh" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.818303 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.820274 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc"] Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.821247 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.832378 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.836490 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc"] Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.919586 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qxx5h"] Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.921333 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qxx5h" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.923792 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.923973 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4xglz" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.924228 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.924495 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.932751 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-bndg7"] Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.946928 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.952511 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.956048 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bndg7"] Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986462 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-startup\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986551 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z529\" (UniqueName: \"kubernetes.io/projected/cec6db71-d107-4ab6-b8ad-138e41b728a8-kube-api-access-8z529\") pod \"frr-k8s-webhook-server-7df86c4f6c-8gfbc\" (UID: \"cec6db71-d107-4ab6-b8ad-138e41b728a8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986577 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa28e58b-e4e2-49d0-ac45-fae2643816f7-metrics-certs\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986601 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-sockets\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986625 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cec6db71-d107-4ab6-b8ad-138e41b728a8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8gfbc\" (UID: \"cec6db71-d107-4ab6-b8ad-138e41b728a8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986641 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-conf\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986820 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq6h7\" (UniqueName: \"kubernetes.io/projected/aa28e58b-e4e2-49d0-ac45-fae2643816f7-kube-api-access-kq6h7\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.986913 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-metrics\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:23 crc kubenswrapper[4931]: I0131 04:37:23.987072 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-reloader\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.089235 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cec6db71-d107-4ab6-b8ad-138e41b728a8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8gfbc\" (UID: \"cec6db71-d107-4ab6-b8ad-138e41b728a8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.089345 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-conf\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.089414 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21aa5a72-184e-4c7f-9f5b-565a457db5a8-metallb-excludel2\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.090388 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-conf\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " 
pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.090520 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq6h7\" (UniqueName: \"kubernetes.io/projected/aa28e58b-e4e2-49d0-ac45-fae2643816f7-kube-api-access-kq6h7\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.091179 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-metrics\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.090562 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-metrics\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.091609 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f751e3-7fea-442f-99b7-70d65ff4e802-metrics-certs\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.091655 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-reloader\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.091702 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-metrics-certs\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092049 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-reloader\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092192 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx8k2\" (UniqueName: \"kubernetes.io/projected/21aa5a72-184e-4c7f-9f5b-565a457db5a8-kube-api-access-kx8k2\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092230 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-startup\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092271 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z529\" (UniqueName: 
\"kubernetes.io/projected/cec6db71-d107-4ab6-b8ad-138e41b728a8-kube-api-access-8z529\") pod \"frr-k8s-webhook-server-7df86c4f6c-8gfbc\" (UID: \"cec6db71-d107-4ab6-b8ad-138e41b728a8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092296 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-memberlist\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092321 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f751e3-7fea-442f-99b7-70d65ff4e802-cert\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092352 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa28e58b-e4e2-49d0-ac45-fae2643816f7-metrics-certs\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092377 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8cjt\" (UniqueName: \"kubernetes.io/projected/81f751e3-7fea-442f-99b7-70d65ff4e802-kube-api-access-r8cjt\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092404 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-sockets\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.092781 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-sockets\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.093873 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa28e58b-e4e2-49d0-ac45-fae2643816f7-frr-startup\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.096978 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cec6db71-d107-4ab6-b8ad-138e41b728a8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8gfbc\" (UID: \"cec6db71-d107-4ab6-b8ad-138e41b728a8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.097843 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa28e58b-e4e2-49d0-ac45-fae2643816f7-metrics-certs\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " 
pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.120396 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq6h7\" (UniqueName: \"kubernetes.io/projected/aa28e58b-e4e2-49d0-ac45-fae2643816f7-kube-api-access-kq6h7\") pod \"frr-k8s-cftp7\" (UID: \"aa28e58b-e4e2-49d0-ac45-fae2643816f7\") " pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.125596 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z529\" (UniqueName: \"kubernetes.io/projected/cec6db71-d107-4ab6-b8ad-138e41b728a8-kube-api-access-8z529\") pod \"frr-k8s-webhook-server-7df86c4f6c-8gfbc\" (UID: \"cec6db71-d107-4ab6-b8ad-138e41b728a8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.141346 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.149884 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.193611 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21aa5a72-184e-4c7f-9f5b-565a457db5a8-metallb-excludel2\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.193675 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f751e3-7fea-442f-99b7-70d65ff4e802-metrics-certs\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.193738 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-metrics-certs\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.193762 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx8k2\" (UniqueName: \"kubernetes.io/projected/21aa5a72-184e-4c7f-9f5b-565a457db5a8-kube-api-access-kx8k2\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.193803 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-memberlist\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.193822 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f751e3-7fea-442f-99b7-70d65ff4e802-cert\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.193845 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-r8cjt\" (UniqueName: \"kubernetes.io/projected/81f751e3-7fea-442f-99b7-70d65ff4e802-kube-api-access-r8cjt\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: E0131 04:37:24.194610 4931 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.194675 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21aa5a72-184e-4c7f-9f5b-565a457db5a8-metallb-excludel2\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: E0131 04:37:24.194707 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-memberlist podName:21aa5a72-184e-4c7f-9f5b-565a457db5a8 nodeName:}" failed. No retries permitted until 2026-01-31 04:37:24.694677417 +0000 UTC m=+803.503906291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-memberlist") pod "speaker-qxx5h" (UID: "21aa5a72-184e-4c7f-9f5b-565a457db5a8") : secret "metallb-memberlist" not found Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.198203 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.198365 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-metrics-certs\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.203302 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f751e3-7fea-442f-99b7-70d65ff4e802-metrics-certs\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.208764 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f751e3-7fea-442f-99b7-70d65ff4e802-cert\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.217242 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx8k2\" (UniqueName: \"kubernetes.io/projected/21aa5a72-184e-4c7f-9f5b-565a457db5a8-kube-api-access-kx8k2\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.219225 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8cjt\" (UniqueName: \"kubernetes.io/projected/81f751e3-7fea-442f-99b7-70d65ff4e802-kube-api-access-r8cjt\") pod \"controller-6968d8fdc4-bndg7\" (UID: \"81f751e3-7fea-442f-99b7-70d65ff4e802\") " pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.270548 4931 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.492262 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bndg7"] Jan 31 04:37:24 crc kubenswrapper[4931]: W0131 04:37:24.501933 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81f751e3_7fea_442f_99b7_70d65ff4e802.slice/crio-7cebaa3ab3b8dca382b05a729cf45d1888035487ee89ec03d0673247d242ede4 WatchSource:0}: Error finding container 7cebaa3ab3b8dca382b05a729cf45d1888035487ee89ec03d0673247d242ede4: Status 404 returned error can't find the container with id 7cebaa3ab3b8dca382b05a729cf45d1888035487ee89ec03d0673247d242ede4 Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.517118 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerStarted","Data":"c052fd36df9642084e705308d33da6f9925501b800e69ee6ade47309f9d626e4"} Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.518038 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bndg7" event={"ID":"81f751e3-7fea-442f-99b7-70d65ff4e802","Type":"ContainerStarted","Data":"7cebaa3ab3b8dca382b05a729cf45d1888035487ee89ec03d0673247d242ede4"} Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.583489 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc"] Jan 31 04:37:24 crc kubenswrapper[4931]: W0131 04:37:24.587939 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcec6db71_d107_4ab6_b8ad_138e41b728a8.slice/crio-e89d076b3fe36e0a7b49a0c22fd0a08cf073f04c6956795164f7efe6146ddc33 WatchSource:0}: Error finding container e89d076b3fe36e0a7b49a0c22fd0a08cf073f04c6956795164f7efe6146ddc33: Status 404 returned error can't find the container with id e89d076b3fe36e0a7b49a0c22fd0a08cf073f04c6956795164f7efe6146ddc33 Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.702110 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-memberlist\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.709321 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21aa5a72-184e-4c7f-9f5b-565a457db5a8-memberlist\") pod \"speaker-qxx5h\" (UID: \"21aa5a72-184e-4c7f-9f5b-565a457db5a8\") " pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: I0131 04:37:24.852191 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qxx5h" Jan 31 04:37:24 crc kubenswrapper[4931]: W0131 04:37:24.868479 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21aa5a72_184e_4c7f_9f5b_565a457db5a8.slice/crio-0b07572f76f6eb40afd8441cc26d2f8f8e1cf1c6ab57563c16043e0a9f9096e2 WatchSource:0}: Error finding container 0b07572f76f6eb40afd8441cc26d2f8f8e1cf1c6ab57563c16043e0a9f9096e2: Status 404 returned error can't find the container with id 0b07572f76f6eb40afd8441cc26d2f8f8e1cf1c6ab57563c16043e0a9f9096e2 Jan 31 04:37:25 crc kubenswrapper[4931]: I0131 04:37:25.531140 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bndg7" event={"ID":"81f751e3-7fea-442f-99b7-70d65ff4e802","Type":"ContainerStarted","Data":"748c4bb5cdaefcdc7c66864e15da793195aab638482d16a0a1dde00b47e01970"} Jan 31 04:37:25 crc kubenswrapper[4931]: I0131 04:37:25.536057 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" event={"ID":"cec6db71-d107-4ab6-b8ad-138e41b728a8","Type":"ContainerStarted","Data":"e89d076b3fe36e0a7b49a0c22fd0a08cf073f04c6956795164f7efe6146ddc33"} Jan 31 04:37:25 crc kubenswrapper[4931]: I0131 04:37:25.537449 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qxx5h" event={"ID":"21aa5a72-184e-4c7f-9f5b-565a457db5a8","Type":"ContainerStarted","Data":"8e55c156341f6f63099ff5330b96bd735812426eefea97338ce9d72dd8ab699a"} Jan 31 04:37:25 crc kubenswrapper[4931]: I0131 04:37:25.537476 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qxx5h" event={"ID":"21aa5a72-184e-4c7f-9f5b-565a457db5a8","Type":"ContainerStarted","Data":"0b07572f76f6eb40afd8441cc26d2f8f8e1cf1c6ab57563c16043e0a9f9096e2"} Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.590389 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qxx5h" event={"ID":"21aa5a72-184e-4c7f-9f5b-565a457db5a8","Type":"ContainerStarted","Data":"f2a63d01e2eb94f5dbda412517edc8e973537acff682e047ed285607dce6653a"} Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.591046 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qxx5h" Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.592496 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bndg7" event={"ID":"81f751e3-7fea-442f-99b7-70d65ff4e802","Type":"ContainerStarted","Data":"64c3a65a988dc8d9ad370f71997cd135ce0818aac5bd5f39f0c8c80110e35ca3"} Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.592633 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.593909 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" event={"ID":"cec6db71-d107-4ab6-b8ad-138e41b728a8","Type":"ContainerStarted","Data":"c734ceaaa004d5beeb83cff53eafc7a1d17ae4b93cb4f22328b807533edc16fc"} Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.594045 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.596333 4931 generic.go:334] "Generic (PLEG): container finished" podID="aa28e58b-e4e2-49d0-ac45-fae2643816f7" 
containerID="2f1893f1331a9e5412764438ee0197213f3f50cfb567712b47185abf30548973" exitCode=0 Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.596361 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerDied","Data":"2f1893f1331a9e5412764438ee0197213f3f50cfb567712b47185abf30548973"} Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.615540 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qxx5h" podStartSLOduration=2.981404211 podStartE2EDuration="9.615523034s" podCreationTimestamp="2026-01-31 04:37:23 +0000 UTC" firstStartedPulling="2026-01-31 04:37:25.147901297 +0000 UTC m=+803.957130171" lastFinishedPulling="2026-01-31 04:37:31.78202011 +0000 UTC m=+810.591248994" observedRunningTime="2026-01-31 04:37:32.610787002 +0000 UTC m=+811.420015896" watchObservedRunningTime="2026-01-31 04:37:32.615523034 +0000 UTC m=+811.424751908" Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.660280 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-bndg7" podStartSLOduration=2.514017279 podStartE2EDuration="9.660261775s" podCreationTimestamp="2026-01-31 04:37:23 +0000 UTC" firstStartedPulling="2026-01-31 04:37:24.636049162 +0000 UTC m=+803.445278076" lastFinishedPulling="2026-01-31 04:37:31.782293698 +0000 UTC m=+810.591522572" observedRunningTime="2026-01-31 04:37:32.657376814 +0000 UTC m=+811.466605688" watchObservedRunningTime="2026-01-31 04:37:32.660261775 +0000 UTC m=+811.469490649" Jan 31 04:37:32 crc kubenswrapper[4931]: I0131 04:37:32.678307 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" podStartSLOduration=2.447942392 podStartE2EDuration="9.678287708s" podCreationTimestamp="2026-01-31 04:37:23 +0000 UTC" firstStartedPulling="2026-01-31 04:37:24.590778017 +0000 UTC m=+803.400006891" lastFinishedPulling="2026-01-31 04:37:31.821123333 +0000 UTC m=+810.630352207" observedRunningTime="2026-01-31 04:37:32.673995598 +0000 UTC m=+811.483224472" watchObservedRunningTime="2026-01-31 04:37:32.678287708 +0000 UTC m=+811.487516582" Jan 31 04:37:33 crc kubenswrapper[4931]: I0131 04:37:33.604408 4931 generic.go:334] "Generic (PLEG): container finished" podID="aa28e58b-e4e2-49d0-ac45-fae2643816f7" containerID="7531af425558396e7c61d106fed3ad8cdca0a8d24d8ca65e9bfb058146e86781" exitCode=0 Jan 31 04:37:33 crc kubenswrapper[4931]: I0131 04:37:33.604524 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerDied","Data":"7531af425558396e7c61d106fed3ad8cdca0a8d24d8ca65e9bfb058146e86781"} Jan 31 04:37:34 crc kubenswrapper[4931]: I0131 04:37:34.280860 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-bndg7" Jan 31 04:37:34 crc kubenswrapper[4931]: I0131 04:37:34.612060 4931 generic.go:334] "Generic (PLEG): container finished" podID="aa28e58b-e4e2-49d0-ac45-fae2643816f7" containerID="d4a51545ca3739e2281b1af62814491594f6ea498bae4b0db68bd70aba80e6f1" exitCode=0 Jan 31 04:37:34 crc kubenswrapper[4931]: I0131 04:37:34.612105 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" 
event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerDied","Data":"d4a51545ca3739e2281b1af62814491594f6ea498bae4b0db68bd70aba80e6f1"} Jan 31 04:37:35 crc kubenswrapper[4931]: I0131 04:37:35.622479 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerStarted","Data":"1a764950e597971f0e67e4b5f89bdf12099cd33184fd7f1c2bfb588c26c2d7d7"} Jan 31 04:37:35 crc kubenswrapper[4931]: I0131 04:37:35.622969 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerStarted","Data":"ae190b4453acbb7b5fe5ae91581bac6f59c2fe514a22ae78c589431bac8a1673"} Jan 31 04:37:35 crc kubenswrapper[4931]: I0131 04:37:35.622992 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerStarted","Data":"67ed75cb49f8b730738ae8b6676c802dac3e6f29c2ae863bf5eace7496681888"} Jan 31 04:37:35 crc kubenswrapper[4931]: I0131 04:37:35.623013 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerStarted","Data":"c044301f35b0896280b3734ebadc67a902084bd54ad2c1333d5bc7e3339ad136"} Jan 31 04:37:35 crc kubenswrapper[4931]: I0131 04:37:35.623025 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerStarted","Data":"1b2b11ef626b4b2e608b518b34a9cae8535f1ed5eab067f74c397afc06b93622"} Jan 31 04:37:36 crc kubenswrapper[4931]: I0131 04:37:36.631448 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cftp7" event={"ID":"aa28e58b-e4e2-49d0-ac45-fae2643816f7","Type":"ContainerStarted","Data":"c1739370fe5c6d061d8f43a4056537e9f5ae5b98aacb3efb79c80f1d9164ecde"} Jan 31 04:37:36 crc kubenswrapper[4931]: I0131 04:37:36.631624 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:36 crc kubenswrapper[4931]: I0131 04:37:36.653655 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cftp7" podStartSLOduration=6.180981569 podStartE2EDuration="13.653637647s" podCreationTimestamp="2026-01-31 04:37:23 +0000 UTC" firstStartedPulling="2026-01-31 04:37:24.309571318 +0000 UTC m=+803.118800202" lastFinishedPulling="2026-01-31 04:37:31.782227406 +0000 UTC m=+810.591456280" observedRunningTime="2026-01-31 04:37:36.651334902 +0000 UTC m=+815.460563776" watchObservedRunningTime="2026-01-31 04:37:36.653637647 +0000 UTC m=+815.462866521" Jan 31 04:37:39 crc kubenswrapper[4931]: I0131 04:37:39.141951 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:39 crc kubenswrapper[4931]: I0131 04:37:39.204862 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:44 crc kubenswrapper[4931]: I0131 04:37:44.145831 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cftp7" Jan 31 04:37:44 crc kubenswrapper[4931]: I0131 04:37:44.154546 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8gfbc" Jan 31 04:37:44 crc kubenswrapper[4931]: I0131 04:37:44.855261 4931 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qxx5h" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.640418 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-jnsjz"] Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.641862 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.644282 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-m5v6x" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.648650 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.648923 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.657253 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jnsjz"] Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.689298 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvw8\" (UniqueName: \"kubernetes.io/projected/eb3022e6-4463-4e85-ae51-1926207c9ac2-kube-api-access-sfvw8\") pod \"mariadb-operator-index-jnsjz\" (UID: \"eb3022e6-4463-4e85-ae51-1926207c9ac2\") " pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.791145 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvw8\" (UniqueName: \"kubernetes.io/projected/eb3022e6-4463-4e85-ae51-1926207c9ac2-kube-api-access-sfvw8\") pod \"mariadb-operator-index-jnsjz\" (UID: \"eb3022e6-4463-4e85-ae51-1926207c9ac2\") " pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.820690 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvw8\" (UniqueName: \"kubernetes.io/projected/eb3022e6-4463-4e85-ae51-1926207c9ac2-kube-api-access-sfvw8\") pod \"mariadb-operator-index-jnsjz\" (UID: \"eb3022e6-4463-4e85-ae51-1926207c9ac2\") " pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:37:50 crc kubenswrapper[4931]: I0131 04:37:50.970007 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:37:51 crc kubenswrapper[4931]: I0131 04:37:51.192773 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jnsjz"] Jan 31 04:37:51 crc kubenswrapper[4931]: I0131 04:37:51.744935 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jnsjz" event={"ID":"eb3022e6-4463-4e85-ae51-1926207c9ac2","Type":"ContainerStarted","Data":"9e029aa5a1bcd916360e7952ac2e200bc9e6645249f8c1d2aa1f1eab693745b7"} Jan 31 04:37:53 crc kubenswrapper[4931]: I0131 04:37:53.411487 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jnsjz"] Jan 31 04:37:54 crc kubenswrapper[4931]: I0131 04:37:54.012480 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-dd29c"] Jan 31 04:37:54 crc kubenswrapper[4931]: I0131 04:37:54.013373 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:37:54 crc kubenswrapper[4931]: I0131 04:37:54.028748 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-dd29c"] Jan 31 04:37:54 crc kubenswrapper[4931]: I0131 04:37:54.033129 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssxj\" (UniqueName: \"kubernetes.io/projected/fd79dd5d-c5f9-423b-8e9a-1209131266bc-kube-api-access-jssxj\") pod \"mariadb-operator-index-dd29c\" (UID: \"fd79dd5d-c5f9-423b-8e9a-1209131266bc\") " pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:37:54 crc kubenswrapper[4931]: I0131 04:37:54.133532 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jssxj\" (UniqueName: \"kubernetes.io/projected/fd79dd5d-c5f9-423b-8e9a-1209131266bc-kube-api-access-jssxj\") pod \"mariadb-operator-index-dd29c\" (UID: \"fd79dd5d-c5f9-423b-8e9a-1209131266bc\") " pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:37:54 crc kubenswrapper[4931]: I0131 04:37:54.153703 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jssxj\" (UniqueName: \"kubernetes.io/projected/fd79dd5d-c5f9-423b-8e9a-1209131266bc-kube-api-access-jssxj\") pod \"mariadb-operator-index-dd29c\" (UID: \"fd79dd5d-c5f9-423b-8e9a-1209131266bc\") " pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:37:54 crc kubenswrapper[4931]: I0131 04:37:54.348484 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:37:56 crc kubenswrapper[4931]: I0131 04:37:56.085921 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-dd29c"] Jan 31 04:37:56 crc kubenswrapper[4931]: W0131 04:37:56.096124 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd79dd5d_c5f9_423b_8e9a_1209131266bc.slice/crio-0689a2674044681ee5de6fb5318e59acbae4d93b35c3a6cf44bc1e7a7a6334c9 WatchSource:0}: Error finding container 0689a2674044681ee5de6fb5318e59acbae4d93b35c3a6cf44bc1e7a7a6334c9: Status 404 returned error can't find the container with id 0689a2674044681ee5de6fb5318e59acbae4d93b35c3a6cf44bc1e7a7a6334c9 Jan 31 04:37:56 crc kubenswrapper[4931]: I0131 04:37:56.782299 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-dd29c" event={"ID":"fd79dd5d-c5f9-423b-8e9a-1209131266bc","Type":"ContainerStarted","Data":"0689a2674044681ee5de6fb5318e59acbae4d93b35c3a6cf44bc1e7a7a6334c9"} Jan 31 04:38:00 crc kubenswrapper[4931]: I0131 04:38:00.829027 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-dd29c" event={"ID":"fd79dd5d-c5f9-423b-8e9a-1209131266bc","Type":"ContainerStarted","Data":"8b86d2a551da37d15559e3c2b740c8ea6646f9d31dfffe0fe83651966de58d5a"} Jan 31 04:38:00 crc kubenswrapper[4931]: I0131 04:38:00.835666 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jnsjz" event={"ID":"eb3022e6-4463-4e85-ae51-1926207c9ac2","Type":"ContainerStarted","Data":"05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874"} Jan 31 04:38:00 crc kubenswrapper[4931]: I0131 04:38:00.835908 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-jnsjz" podUID="eb3022e6-4463-4e85-ae51-1926207c9ac2" containerName="registry-server" containerID="cri-o://05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874" gracePeriod=2 Jan 31 04:38:00 crc kubenswrapper[4931]: I0131 04:38:00.849976 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-dd29c" podStartSLOduration=4.195643502 podStartE2EDuration="7.849957449s" podCreationTimestamp="2026-01-31 04:37:53 +0000 UTC" firstStartedPulling="2026-01-31 04:37:56.097848792 +0000 UTC m=+834.907077656" lastFinishedPulling="2026-01-31 04:37:59.752162729 +0000 UTC m=+838.561391603" observedRunningTime="2026-01-31 04:38:00.847193182 +0000 UTC m=+839.656422136" watchObservedRunningTime="2026-01-31 04:38:00.849957449 +0000 UTC m=+839.659186323" Jan 31 04:38:00 crc kubenswrapper[4931]: I0131 04:38:00.873569 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-jnsjz" podStartSLOduration=2.295960392 podStartE2EDuration="10.873539528s" podCreationTimestamp="2026-01-31 04:37:50 +0000 UTC" firstStartedPulling="2026-01-31 04:37:51.202938205 +0000 UTC m=+830.012167079" lastFinishedPulling="2026-01-31 04:37:59.780517331 +0000 UTC m=+838.589746215" observedRunningTime="2026-01-31 04:38:00.870518654 +0000 UTC m=+839.679747528" watchObservedRunningTime="2026-01-31 04:38:00.873539528 +0000 UTC m=+839.682768442" Jan 31 04:38:00 crc kubenswrapper[4931]: I0131 04:38:00.970339 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.197769 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.246746 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfvw8\" (UniqueName: \"kubernetes.io/projected/eb3022e6-4463-4e85-ae51-1926207c9ac2-kube-api-access-sfvw8\") pod \"eb3022e6-4463-4e85-ae51-1926207c9ac2\" (UID: \"eb3022e6-4463-4e85-ae51-1926207c9ac2\") " Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.255827 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3022e6-4463-4e85-ae51-1926207c9ac2-kube-api-access-sfvw8" (OuterVolumeSpecName: "kube-api-access-sfvw8") pod "eb3022e6-4463-4e85-ae51-1926207c9ac2" (UID: "eb3022e6-4463-4e85-ae51-1926207c9ac2"). InnerVolumeSpecName "kube-api-access-sfvw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.348347 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfvw8\" (UniqueName: \"kubernetes.io/projected/eb3022e6-4463-4e85-ae51-1926207c9ac2-kube-api-access-sfvw8\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.852997 4931 generic.go:334] "Generic (PLEG): container finished" podID="eb3022e6-4463-4e85-ae51-1926207c9ac2" containerID="05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874" exitCode=0 Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.853089 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jnsjz" event={"ID":"eb3022e6-4463-4e85-ae51-1926207c9ac2","Type":"ContainerDied","Data":"05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874"} Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.853191 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jnsjz" event={"ID":"eb3022e6-4463-4e85-ae51-1926207c9ac2","Type":"ContainerDied","Data":"9e029aa5a1bcd916360e7952ac2e200bc9e6645249f8c1d2aa1f1eab693745b7"} Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.853240 4931 scope.go:117] "RemoveContainer" containerID="05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.853118 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-jnsjz" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.886430 4931 scope.go:117] "RemoveContainer" containerID="05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874" Jan 31 04:38:01 crc kubenswrapper[4931]: E0131 04:38:01.887038 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874\": container with ID starting with 05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874 not found: ID does not exist" containerID="05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.887107 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874"} err="failed to get container status \"05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874\": rpc error: code = NotFound desc = could not find container \"05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874\": container with ID starting with 05e7434b38486fb8f9d007e95cce06416ecb92c72338c569cc01679ba3021874 not found: ID does not exist" Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.906202 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jnsjz"] Jan 31 04:38:01 crc kubenswrapper[4931]: I0131 04:38:01.906242 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-jnsjz"] Jan 31 04:38:03 crc kubenswrapper[4931]: I0131 04:38:03.909906 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3022e6-4463-4e85-ae51-1926207c9ac2" path="/var/lib/kubelet/pods/eb3022e6-4463-4e85-ae51-1926207c9ac2/volumes" Jan 31 04:38:04 crc kubenswrapper[4931]: I0131 04:38:04.349647 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:38:04 crc kubenswrapper[4931]: I0131 04:38:04.349764 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:38:04 crc kubenswrapper[4931]: I0131 04:38:04.382128 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:38:04 crc kubenswrapper[4931]: I0131 04:38:04.912449 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-dd29c" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.789196 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f"] Jan 31 04:38:05 crc kubenswrapper[4931]: E0131 04:38:05.790136 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3022e6-4463-4e85-ae51-1926207c9ac2" containerName="registry-server" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.790232 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3022e6-4463-4e85-ae51-1926207c9ac2" containerName="registry-server" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.790433 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3022e6-4463-4e85-ae51-1926207c9ac2" containerName="registry-server" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.791559 4931 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.794039 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rb6qm" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.804992 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f"] Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.875739 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxbm\" (UniqueName: \"kubernetes.io/projected/4446c1a5-9e5b-4b6f-8e57-336a4960e408-kube-api-access-pnxbm\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.875839 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-bundle\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.875963 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-util\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.977145 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxbm\" (UniqueName: \"kubernetes.io/projected/4446c1a5-9e5b-4b6f-8e57-336a4960e408-kube-api-access-pnxbm\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.977258 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-bundle\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.977300 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-util\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.977863 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-util\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:05 crc kubenswrapper[4931]: I0131 04:38:05.977891 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-bundle\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:06 crc kubenswrapper[4931]: I0131 04:38:06.009186 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxbm\" (UniqueName: \"kubernetes.io/projected/4446c1a5-9e5b-4b6f-8e57-336a4960e408-kube-api-access-pnxbm\") pod \"d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:06 crc kubenswrapper[4931]: I0131 04:38:06.111465 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:06 crc kubenswrapper[4931]: I0131 04:38:06.550958 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f"] Jan 31 04:38:06 crc kubenswrapper[4931]: W0131 04:38:06.561836 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4446c1a5_9e5b_4b6f_8e57_336a4960e408.slice/crio-9bb79415039aa4fcd45558f351654dc9c21a00d8fb4d6d4be02a7e2d20d0b36d WatchSource:0}: Error finding container 9bb79415039aa4fcd45558f351654dc9c21a00d8fb4d6d4be02a7e2d20d0b36d: Status 404 returned error can't find the container with id 9bb79415039aa4fcd45558f351654dc9c21a00d8fb4d6d4be02a7e2d20d0b36d Jan 31 04:38:06 crc kubenswrapper[4931]: I0131 04:38:06.889506 4931 generic.go:334] "Generic (PLEG): container finished" podID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerID="d7e45d7756dfc5f0f60bea58437ed8fbac3e8f2cc48dd838432037b04039e351" exitCode=0 Jan 31 04:38:06 crc kubenswrapper[4931]: I0131 04:38:06.889554 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" event={"ID":"4446c1a5-9e5b-4b6f-8e57-336a4960e408","Type":"ContainerDied","Data":"d7e45d7756dfc5f0f60bea58437ed8fbac3e8f2cc48dd838432037b04039e351"} Jan 31 04:38:06 crc kubenswrapper[4931]: I0131 04:38:06.889600 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" event={"ID":"4446c1a5-9e5b-4b6f-8e57-336a4960e408","Type":"ContainerStarted","Data":"9bb79415039aa4fcd45558f351654dc9c21a00d8fb4d6d4be02a7e2d20d0b36d"} Jan 31 04:38:07 crc kubenswrapper[4931]: I0131 04:38:07.896791 4931 generic.go:334] "Generic (PLEG): container finished" podID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerID="f904b19913a71934663a434bc8b688bf30f82265e665090ce54be25d4788c618" exitCode=0 Jan 31 04:38:07 crc kubenswrapper[4931]: I0131 04:38:07.904899 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" event={"ID":"4446c1a5-9e5b-4b6f-8e57-336a4960e408","Type":"ContainerDied","Data":"f904b19913a71934663a434bc8b688bf30f82265e665090ce54be25d4788c618"} Jan 31 04:38:08 crc kubenswrapper[4931]: I0131 04:38:08.904679 4931 generic.go:334] "Generic (PLEG): container finished" podID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerID="8f4de9832f92418ac76bc35ded9b9f72f16ab07cc1a9001821c3f685b345fa28" exitCode=0 Jan 31 04:38:08 crc kubenswrapper[4931]: I0131 04:38:08.904756 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" event={"ID":"4446c1a5-9e5b-4b6f-8e57-336a4960e408","Type":"ContainerDied","Data":"8f4de9832f92418ac76bc35ded9b9f72f16ab07cc1a9001821c3f685b345fa28"} Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.144993 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.149338 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxbm\" (UniqueName: \"kubernetes.io/projected/4446c1a5-9e5b-4b6f-8e57-336a4960e408-kube-api-access-pnxbm\") pod \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.149418 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-util\") pod \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.149519 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-bundle\") pod \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\" (UID: \"4446c1a5-9e5b-4b6f-8e57-336a4960e408\") " Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.150355 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-bundle" (OuterVolumeSpecName: "bundle") pod "4446c1a5-9e5b-4b6f-8e57-336a4960e408" (UID: "4446c1a5-9e5b-4b6f-8e57-336a4960e408"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.165555 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-util" (OuterVolumeSpecName: "util") pod "4446c1a5-9e5b-4b6f-8e57-336a4960e408" (UID: "4446c1a5-9e5b-4b6f-8e57-336a4960e408"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.167887 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4446c1a5-9e5b-4b6f-8e57-336a4960e408-kube-api-access-pnxbm" (OuterVolumeSpecName: "kube-api-access-pnxbm") pod "4446c1a5-9e5b-4b6f-8e57-336a4960e408" (UID: "4446c1a5-9e5b-4b6f-8e57-336a4960e408"). InnerVolumeSpecName "kube-api-access-pnxbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.250511 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.250553 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxbm\" (UniqueName: \"kubernetes.io/projected/4446c1a5-9e5b-4b6f-8e57-336a4960e408-kube-api-access-pnxbm\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.250565 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4446c1a5-9e5b-4b6f-8e57-336a4960e408-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.919691 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" event={"ID":"4446c1a5-9e5b-4b6f-8e57-336a4960e408","Type":"ContainerDied","Data":"9bb79415039aa4fcd45558f351654dc9c21a00d8fb4d6d4be02a7e2d20d0b36d"} Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.920405 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb79415039aa4fcd45558f351654dc9c21a00d8fb4d6d4be02a7e2d20d0b36d" Jan 31 04:38:10 crc kubenswrapper[4931]: I0131 04:38:10.919843 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.772704 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm"] Jan 31 04:38:15 crc kubenswrapper[4931]: E0131 04:38:15.773219 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerName="util" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.773230 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerName="util" Jan 31 04:38:15 crc kubenswrapper[4931]: E0131 04:38:15.773242 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerName="pull" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.773248 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerName="pull" Jan 31 04:38:15 crc kubenswrapper[4931]: E0131 04:38:15.773263 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerName="extract" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.773269 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerName="extract" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.773368 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4446c1a5-9e5b-4b6f-8e57-336a4960e408" containerName="extract" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.773920 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.776944 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.778842 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.779235 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mf5gt" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.841401 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm"] Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.935678 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7w4\" (UniqueName: \"kubernetes.io/projected/0cf3c1af-884b-4ec3-b6db-5b975007174b-kube-api-access-pv7w4\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.935782 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf3c1af-884b-4ec3-b6db-5b975007174b-webhook-cert\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:15 crc kubenswrapper[4931]: I0131 04:38:15.935836 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf3c1af-884b-4ec3-b6db-5b975007174b-apiservice-cert\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.036946 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7w4\" (UniqueName: \"kubernetes.io/projected/0cf3c1af-884b-4ec3-b6db-5b975007174b-kube-api-access-pv7w4\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.037309 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf3c1af-884b-4ec3-b6db-5b975007174b-webhook-cert\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.037413 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf3c1af-884b-4ec3-b6db-5b975007174b-apiservice-cert\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") 
" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.046707 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf3c1af-884b-4ec3-b6db-5b975007174b-apiservice-cert\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.052364 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf3c1af-884b-4ec3-b6db-5b975007174b-webhook-cert\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.060365 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7w4\" (UniqueName: \"kubernetes.io/projected/0cf3c1af-884b-4ec3-b6db-5b975007174b-kube-api-access-pv7w4\") pod \"mariadb-operator-controller-manager-6cbf8cbfc7-hctgm\" (UID: \"0cf3c1af-884b-4ec3-b6db-5b975007174b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.098775 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.581245 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm"] Jan 31 04:38:16 crc kubenswrapper[4931]: I0131 04:38:16.970794 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" event={"ID":"0cf3c1af-884b-4ec3-b6db-5b975007174b","Type":"ContainerStarted","Data":"78e99d5eaec4e0c43188b4ede4bcf0a673cc3a3caa6bce1585987746f6e47676"} Jan 31 04:38:20 crc kubenswrapper[4931]: I0131 04:38:20.996376 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" event={"ID":"0cf3c1af-884b-4ec3-b6db-5b975007174b","Type":"ContainerStarted","Data":"015b28fc3b8052d1a018cc98335931ad501b00f03892ca138f5cd91a159feeaf"} Jan 31 04:38:26 crc kubenswrapper[4931]: I0131 04:38:26.822531 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4g8z"] Jan 31 04:38:26 crc kubenswrapper[4931]: I0131 04:38:26.825006 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:26 crc kubenswrapper[4931]: I0131 04:38:26.833929 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4g8z"] Jan 31 04:38:26 crc kubenswrapper[4931]: I0131 04:38:26.960747 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92dw\" (UniqueName: \"kubernetes.io/projected/76d300a3-5256-4491-b26f-e8984d386aa6-kube-api-access-t92dw\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:26 crc kubenswrapper[4931]: I0131 04:38:26.961107 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-utilities\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:26 crc kubenswrapper[4931]: I0131 04:38:26.961194 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-catalog-content\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.062709 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-catalog-content\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.062840 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92dw\" (UniqueName: \"kubernetes.io/projected/76d300a3-5256-4491-b26f-e8984d386aa6-kube-api-access-t92dw\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.063257 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-catalog-content\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.063284 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-utilities\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.063629 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-utilities\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.085755 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-t92dw\" (UniqueName: \"kubernetes.io/projected/76d300a3-5256-4491-b26f-e8984d386aa6-kube-api-access-t92dw\") pod \"redhat-marketplace-t4g8z\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.155646 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:27 crc kubenswrapper[4931]: I0131 04:38:27.694862 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4g8z"] Jan 31 04:38:29 crc kubenswrapper[4931]: I0131 04:38:29.056093 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4g8z" event={"ID":"76d300a3-5256-4491-b26f-e8984d386aa6","Type":"ContainerStarted","Data":"c2204c76a20ce3c5696337e27df02aca8b7df9c6ec3bd807004bd7a5b524f899"} Jan 31 04:38:30 crc kubenswrapper[4931]: I0131 04:38:30.069533 4931 generic.go:334] "Generic (PLEG): container finished" podID="76d300a3-5256-4491-b26f-e8984d386aa6" containerID="945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3" exitCode=0 Jan 31 04:38:30 crc kubenswrapper[4931]: I0131 04:38:30.069619 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4g8z" event={"ID":"76d300a3-5256-4491-b26f-e8984d386aa6","Type":"ContainerDied","Data":"945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3"} Jan 31 04:38:30 crc kubenswrapper[4931]: I0131 04:38:30.073192 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" event={"ID":"0cf3c1af-884b-4ec3-b6db-5b975007174b","Type":"ContainerStarted","Data":"b18a4f45e4817783c2d2d8390545443f9891604efe61ac6e84611900923a1333"} Jan 31 04:38:30 crc kubenswrapper[4931]: I0131 04:38:30.073416 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:30 crc kubenswrapper[4931]: I0131 04:38:30.092093 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" Jan 31 04:38:30 crc kubenswrapper[4931]: I0131 04:38:30.132873 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6cbf8cbfc7-hctgm" podStartSLOduration=2.562714424 podStartE2EDuration="15.132851174s" podCreationTimestamp="2026-01-31 04:38:15 +0000 UTC" firstStartedPulling="2026-01-31 04:38:16.598063732 +0000 UTC m=+855.407292606" lastFinishedPulling="2026-01-31 04:38:29.168200492 +0000 UTC m=+867.977429356" observedRunningTime="2026-01-31 04:38:30.128174013 +0000 UTC m=+868.937402887" watchObservedRunningTime="2026-01-31 04:38:30.132851174 +0000 UTC m=+868.942080048" Jan 31 04:38:32 crc kubenswrapper[4931]: I0131 04:38:32.092965 4931 generic.go:334] "Generic (PLEG): container finished" podID="76d300a3-5256-4491-b26f-e8984d386aa6" containerID="e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36" exitCode=0 Jan 31 04:38:32 crc kubenswrapper[4931]: I0131 04:38:32.094908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4g8z" 
event={"ID":"76d300a3-5256-4491-b26f-e8984d386aa6","Type":"ContainerDied","Data":"e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36"} Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.100106 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4g8z" event={"ID":"76d300a3-5256-4491-b26f-e8984d386aa6","Type":"ContainerStarted","Data":"3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4"} Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.140116 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4g8z" podStartSLOduration=4.71046067 podStartE2EDuration="7.140087327s" podCreationTimestamp="2026-01-31 04:38:26 +0000 UTC" firstStartedPulling="2026-01-31 04:38:30.072481615 +0000 UTC m=+868.881710529" lastFinishedPulling="2026-01-31 04:38:32.502108312 +0000 UTC m=+871.311337186" observedRunningTime="2026-01-31 04:38:33.124683876 +0000 UTC m=+871.933912760" watchObservedRunningTime="2026-01-31 04:38:33.140087327 +0000 UTC m=+871.949316211" Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.415309 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-mnnbm"] Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.416296 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.418184 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-djj4q" Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.428774 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-mnnbm"] Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.557288 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbww7\" (UniqueName: \"kubernetes.io/projected/26afb832-6066-4add-8282-b44b23f796b1-kube-api-access-mbww7\") pod \"infra-operator-index-mnnbm\" (UID: \"26afb832-6066-4add-8282-b44b23f796b1\") " pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.659244 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbww7\" (UniqueName: \"kubernetes.io/projected/26afb832-6066-4add-8282-b44b23f796b1-kube-api-access-mbww7\") pod \"infra-operator-index-mnnbm\" (UID: \"26afb832-6066-4add-8282-b44b23f796b1\") " pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.683177 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbww7\" (UniqueName: \"kubernetes.io/projected/26afb832-6066-4add-8282-b44b23f796b1-kube-api-access-mbww7\") pod \"infra-operator-index-mnnbm\" (UID: \"26afb832-6066-4add-8282-b44b23f796b1\") " pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:33 crc kubenswrapper[4931]: I0131 04:38:33.743462 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:34 crc kubenswrapper[4931]: I0131 04:38:34.170492 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-mnnbm"] Jan 31 04:38:34 crc kubenswrapper[4931]: W0131 04:38:34.179298 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26afb832_6066_4add_8282_b44b23f796b1.slice/crio-e7de253ea4da54d4c067a9af684836ecf591a45fc28dfc149084fc5b0174c4a2 WatchSource:0}: Error finding container e7de253ea4da54d4c067a9af684836ecf591a45fc28dfc149084fc5b0174c4a2: Status 404 returned error can't find the container with id e7de253ea4da54d4c067a9af684836ecf591a45fc28dfc149084fc5b0174c4a2 Jan 31 04:38:35 crc kubenswrapper[4931]: I0131 04:38:35.126326 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-mnnbm" event={"ID":"26afb832-6066-4add-8282-b44b23f796b1","Type":"ContainerStarted","Data":"e7de253ea4da54d4c067a9af684836ecf591a45fc28dfc149084fc5b0174c4a2"} Jan 31 04:38:36 crc kubenswrapper[4931]: I0131 04:38:36.136568 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-mnnbm" event={"ID":"26afb832-6066-4add-8282-b44b23f796b1","Type":"ContainerStarted","Data":"cd88e94ee804a4a14207fe6dde5763de7254fba024a61c57530fc23616548376"} Jan 31 04:38:36 crc kubenswrapper[4931]: I0131 04:38:36.190338 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-mnnbm" podStartSLOduration=1.8555769 podStartE2EDuration="3.190312893s" podCreationTimestamp="2026-01-31 04:38:33 +0000 UTC" firstStartedPulling="2026-01-31 04:38:34.181813635 +0000 UTC m=+872.991042509" lastFinishedPulling="2026-01-31 04:38:35.516549628 +0000 UTC m=+874.325778502" observedRunningTime="2026-01-31 04:38:36.185685784 +0000 UTC m=+874.994914658" watchObservedRunningTime="2026-01-31 04:38:36.190312893 +0000 UTC m=+874.999541767" Jan 31 04:38:37 crc kubenswrapper[4931]: I0131 04:38:37.155899 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:37 crc kubenswrapper[4931]: I0131 04:38:37.155957 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:37 crc kubenswrapper[4931]: I0131 04:38:37.200890 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:38 crc kubenswrapper[4931]: I0131 04:38:38.213378 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:40 crc kubenswrapper[4931]: I0131 04:38:40.207175 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4g8z"] Jan 31 04:38:40 crc kubenswrapper[4931]: I0131 04:38:40.207391 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4g8z" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="registry-server" containerID="cri-o://3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4" gracePeriod=2 Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.178225 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.206127 4931 generic.go:334] "Generic (PLEG): container finished" podID="76d300a3-5256-4491-b26f-e8984d386aa6" containerID="3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4" exitCode=0 Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.206178 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4g8z" event={"ID":"76d300a3-5256-4491-b26f-e8984d386aa6","Type":"ContainerDied","Data":"3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4"} Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.206270 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4g8z" event={"ID":"76d300a3-5256-4491-b26f-e8984d386aa6","Type":"ContainerDied","Data":"c2204c76a20ce3c5696337e27df02aca8b7df9c6ec3bd807004bd7a5b524f899"} Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.206305 4931 scope.go:117] "RemoveContainer" containerID="3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.206929 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4g8z" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.231149 4931 scope.go:117] "RemoveContainer" containerID="e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.253751 4931 scope.go:117] "RemoveContainer" containerID="945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.260817 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-catalog-content\") pod \"76d300a3-5256-4491-b26f-e8984d386aa6\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.260907 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t92dw\" (UniqueName: \"kubernetes.io/projected/76d300a3-5256-4491-b26f-e8984d386aa6-kube-api-access-t92dw\") pod \"76d300a3-5256-4491-b26f-e8984d386aa6\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.260975 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-utilities\") pod \"76d300a3-5256-4491-b26f-e8984d386aa6\" (UID: \"76d300a3-5256-4491-b26f-e8984d386aa6\") " Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.262326 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-utilities" (OuterVolumeSpecName: "utilities") pod "76d300a3-5256-4491-b26f-e8984d386aa6" (UID: "76d300a3-5256-4491-b26f-e8984d386aa6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.266679 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d300a3-5256-4491-b26f-e8984d386aa6-kube-api-access-t92dw" (OuterVolumeSpecName: "kube-api-access-t92dw") pod "76d300a3-5256-4491-b26f-e8984d386aa6" (UID: "76d300a3-5256-4491-b26f-e8984d386aa6"). InnerVolumeSpecName "kube-api-access-t92dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.271971 4931 scope.go:117] "RemoveContainer" containerID="3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4" Jan 31 04:38:41 crc kubenswrapper[4931]: E0131 04:38:41.272534 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4\": container with ID starting with 3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4 not found: ID does not exist" containerID="3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.272571 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4"} err="failed to get container status \"3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4\": rpc error: code = NotFound desc = could not find container \"3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4\": container with ID starting with 3d5fbe857e8e20303f60a7f8626f6d76eee6bdce068ac0d36a9d373700bd42a4 not found: ID does not exist" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.272598 4931 scope.go:117] "RemoveContainer" containerID="e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36" Jan 31 04:38:41 crc kubenswrapper[4931]: E0131 04:38:41.272924 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36\": container with ID starting with e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36 not found: ID does not exist" containerID="e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.272949 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36"} err="failed to get container status \"e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36\": rpc error: code = NotFound desc = could not find container \"e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36\": container with ID starting with e7315da5f2ed8a7eac2705858e0682030c4859175a3c468004d6c6f7f10a2e36 not found: ID does not exist" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.272964 4931 scope.go:117] "RemoveContainer" containerID="945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3" Jan 31 04:38:41 crc kubenswrapper[4931]: E0131 04:38:41.273295 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3\": container with ID starting with 945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3 not found: ID does not 
exist" containerID="945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.273327 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3"} err="failed to get container status \"945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3\": rpc error: code = NotFound desc = could not find container \"945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3\": container with ID starting with 945207fcaf575765192bc990c304befee192a5ebf0555ea8cd93d78484cb46a3 not found: ID does not exist" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.285379 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76d300a3-5256-4491-b26f-e8984d386aa6" (UID: "76d300a3-5256-4491-b26f-e8984d386aa6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.362857 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.362904 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t92dw\" (UniqueName: \"kubernetes.io/projected/76d300a3-5256-4491-b26f-e8984d386aa6-kube-api-access-t92dw\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.362918 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d300a3-5256-4491-b26f-e8984d386aa6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.536197 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4g8z"] Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.540688 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4g8z"] Jan 31 04:38:41 crc kubenswrapper[4931]: I0131 04:38:41.909403 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" path="/var/lib/kubelet/pods/76d300a3-5256-4491-b26f-e8984d386aa6/volumes" Jan 31 04:38:43 crc kubenswrapper[4931]: I0131 04:38:43.744706 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:43 crc kubenswrapper[4931]: I0131 04:38:43.745388 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:43 crc kubenswrapper[4931]: I0131 04:38:43.785260 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:44 crc kubenswrapper[4931]: I0131 04:38:44.258770 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-mnnbm" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.666079 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l"] Jan 31 04:38:45 crc kubenswrapper[4931]: E0131 04:38:45.666714 
4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="extract-content" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.666748 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="extract-content" Jan 31 04:38:45 crc kubenswrapper[4931]: E0131 04:38:45.666760 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="registry-server" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.666769 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="registry-server" Jan 31 04:38:45 crc kubenswrapper[4931]: E0131 04:38:45.666793 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="extract-utilities" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.666804 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="extract-utilities" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.666949 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d300a3-5256-4491-b26f-e8984d386aa6" containerName="registry-server" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.667993 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.671028 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rb6qm" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.681393 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l"] Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.830961 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztkl\" (UniqueName: \"kubernetes.io/projected/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-kube-api-access-pztkl\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.831019 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-bundle\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.831043 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-util\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.932297 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pztkl\" (UniqueName: \"kubernetes.io/projected/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-kube-api-access-pztkl\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.932356 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-bundle\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.932381 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-util\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.932964 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-util\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.932967 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-bundle\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.956642 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztkl\" (UniqueName: \"kubernetes.io/projected/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-kube-api-access-pztkl\") pod \"676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:45 crc kubenswrapper[4931]: I0131 04:38:45.990714 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:46 crc kubenswrapper[4931]: I0131 04:38:46.420906 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l"] Jan 31 04:38:46 crc kubenswrapper[4931]: W0131 04:38:46.424832 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb9b4e4_99a7_469f_b4f7_9cd46b023602.slice/crio-06d00510b922727da0044ded7357f6390228a8979c4978c24d146e1ad4f637e5 WatchSource:0}: Error finding container 06d00510b922727da0044ded7357f6390228a8979c4978c24d146e1ad4f637e5: Status 404 returned error can't find the container with id 06d00510b922727da0044ded7357f6390228a8979c4978c24d146e1ad4f637e5 Jan 31 04:38:47 crc kubenswrapper[4931]: I0131 04:38:47.256682 4931 generic.go:334] "Generic (PLEG): container finished" podID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerID="0193ec10b06b881cba887b30b6ae692c79b411eb96158551dbd7e2cd187d17e6" exitCode=0 Jan 31 04:38:47 crc kubenswrapper[4931]: I0131 04:38:47.256804 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" event={"ID":"2cb9b4e4-99a7-469f-b4f7-9cd46b023602","Type":"ContainerDied","Data":"0193ec10b06b881cba887b30b6ae692c79b411eb96158551dbd7e2cd187d17e6"} Jan 31 04:38:47 crc kubenswrapper[4931]: I0131 04:38:47.257055 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" event={"ID":"2cb9b4e4-99a7-469f-b4f7-9cd46b023602","Type":"ContainerStarted","Data":"06d00510b922727da0044ded7357f6390228a8979c4978c24d146e1ad4f637e5"} Jan 31 04:38:48 crc kubenswrapper[4931]: I0131 04:38:48.268386 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" event={"ID":"2cb9b4e4-99a7-469f-b4f7-9cd46b023602","Type":"ContainerStarted","Data":"307279badc326dd458478ecf832c904b9ecad61e7ad849783aca152a1d3eabea"} Jan 31 04:38:49 crc kubenswrapper[4931]: I0131 04:38:49.275153 4931 generic.go:334] "Generic (PLEG): container finished" podID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerID="307279badc326dd458478ecf832c904b9ecad61e7ad849783aca152a1d3eabea" exitCode=0 Jan 31 04:38:49 crc kubenswrapper[4931]: I0131 04:38:49.275233 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" event={"ID":"2cb9b4e4-99a7-469f-b4f7-9cd46b023602","Type":"ContainerDied","Data":"307279badc326dd458478ecf832c904b9ecad61e7ad849783aca152a1d3eabea"} Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.281859 4931 generic.go:334] "Generic (PLEG): container finished" podID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerID="7cd8eb70fe0b8166133c7ee215d27fc430a8bd17316a32bd839fb2d772784186" exitCode=0 Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.281912 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" event={"ID":"2cb9b4e4-99a7-469f-b4f7-9cd46b023602","Type":"ContainerDied","Data":"7cd8eb70fe0b8166133c7ee215d27fc430a8bd17316a32bd839fb2d772784186"} Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.415515 4931 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-ptdc2"] Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.416775 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.433378 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptdc2"] Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.494071 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzblg\" (UniqueName: \"kubernetes.io/projected/3638da9a-35df-4043-801f-33c957a61b61-kube-api-access-rzblg\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.494130 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-catalog-content\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.494425 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-utilities\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.595673 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-utilities\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.595778 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzblg\" (UniqueName: \"kubernetes.io/projected/3638da9a-35df-4043-801f-33c957a61b61-kube-api-access-rzblg\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.595802 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-catalog-content\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.596244 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-utilities\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.596277 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-catalog-content\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " 
pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.615988 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzblg\" (UniqueName: \"kubernetes.io/projected/3638da9a-35df-4043-801f-33c957a61b61-kube-api-access-rzblg\") pod \"redhat-operators-ptdc2\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:50 crc kubenswrapper[4931]: I0131 04:38:50.732543 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.207885 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptdc2"] Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.292215 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptdc2" event={"ID":"3638da9a-35df-4043-801f-33c957a61b61","Type":"ContainerStarted","Data":"6470841b3a7abe6556e783bde3e6d71ca8057cd0a7cb6f789a2b2a17cf9c1a3a"} Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.483326 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.608508 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-util\") pod \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.608572 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pztkl\" (UniqueName: \"kubernetes.io/projected/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-kube-api-access-pztkl\") pod \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.608676 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-bundle\") pod \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\" (UID: \"2cb9b4e4-99a7-469f-b4f7-9cd46b023602\") " Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.609697 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-bundle" (OuterVolumeSpecName: "bundle") pod "2cb9b4e4-99a7-469f-b4f7-9cd46b023602" (UID: "2cb9b4e4-99a7-469f-b4f7-9cd46b023602"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.615011 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-kube-api-access-pztkl" (OuterVolumeSpecName: "kube-api-access-pztkl") pod "2cb9b4e4-99a7-469f-b4f7-9cd46b023602" (UID: "2cb9b4e4-99a7-469f-b4f7-9cd46b023602"). InnerVolumeSpecName "kube-api-access-pztkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.630645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-util" (OuterVolumeSpecName: "util") pod "2cb9b4e4-99a7-469f-b4f7-9cd46b023602" (UID: "2cb9b4e4-99a7-469f-b4f7-9cd46b023602"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.709998 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.710037 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:51 crc kubenswrapper[4931]: I0131 04:38:51.710050 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pztkl\" (UniqueName: \"kubernetes.io/projected/2cb9b4e4-99a7-469f-b4f7-9cd46b023602-kube-api-access-pztkl\") on node \"crc\" DevicePath \"\"" Jan 31 04:38:52 crc kubenswrapper[4931]: I0131 04:38:52.298844 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" event={"ID":"2cb9b4e4-99a7-469f-b4f7-9cd46b023602","Type":"ContainerDied","Data":"06d00510b922727da0044ded7357f6390228a8979c4978c24d146e1ad4f637e5"} Jan 31 04:38:52 crc kubenswrapper[4931]: I0131 04:38:52.299172 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d00510b922727da0044ded7357f6390228a8979c4978c24d146e1ad4f637e5" Jan 31 04:38:52 crc kubenswrapper[4931]: I0131 04:38:52.298896 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l" Jan 31 04:38:52 crc kubenswrapper[4931]: I0131 04:38:52.299945 4931 generic.go:334] "Generic (PLEG): container finished" podID="3638da9a-35df-4043-801f-33c957a61b61" containerID="c854206a2de46f57ac49fce3627cb768dd9e4ffb2f3e29737025e0363dd94a60" exitCode=0 Jan 31 04:38:52 crc kubenswrapper[4931]: I0131 04:38:52.299966 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptdc2" event={"ID":"3638da9a-35df-4043-801f-33c957a61b61","Type":"ContainerDied","Data":"c854206a2de46f57ac49fce3627cb768dd9e4ffb2f3e29737025e0363dd94a60"} Jan 31 04:38:53 crc kubenswrapper[4931]: I0131 04:38:53.309748 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptdc2" event={"ID":"3638da9a-35df-4043-801f-33c957a61b61","Type":"ContainerStarted","Data":"bcf497fd4f98d307522bb53d01fb18fd5fbe4bc809246d7bd46386807feaa627"} Jan 31 04:38:54 crc kubenswrapper[4931]: I0131 04:38:54.317314 4931 generic.go:334] "Generic (PLEG): container finished" podID="3638da9a-35df-4043-801f-33c957a61b61" containerID="bcf497fd4f98d307522bb53d01fb18fd5fbe4bc809246d7bd46386807feaa627" exitCode=0 Jan 31 04:38:54 crc kubenswrapper[4931]: I0131 04:38:54.317357 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptdc2" event={"ID":"3638da9a-35df-4043-801f-33c957a61b61","Type":"ContainerDied","Data":"bcf497fd4f98d307522bb53d01fb18fd5fbe4bc809246d7bd46386807feaa627"} Jan 31 04:38:55 crc kubenswrapper[4931]: I0131 04:38:55.333816 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptdc2" event={"ID":"3638da9a-35df-4043-801f-33c957a61b61","Type":"ContainerStarted","Data":"586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d"} Jan 31 04:38:55 crc kubenswrapper[4931]: I0131 04:38:55.363414 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ptdc2" podStartSLOduration=2.738823911 podStartE2EDuration="5.363379599s" podCreationTimestamp="2026-01-31 04:38:50 +0000 UTC" firstStartedPulling="2026-01-31 04:38:52.301316204 +0000 UTC m=+891.110545078" lastFinishedPulling="2026-01-31 04:38:54.925871892 +0000 UTC m=+893.735100766" observedRunningTime="2026-01-31 04:38:55.354083499 +0000 UTC m=+894.163312383" watchObservedRunningTime="2026-01-31 04:38:55.363379599 +0000 UTC m=+894.172608483" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.062512 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j"] Jan 31 04:38:58 crc kubenswrapper[4931]: E0131 04:38:58.063330 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerName="util" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.063349 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerName="util" Jan 31 04:38:58 crc kubenswrapper[4931]: E0131 04:38:58.063361 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerName="pull" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.063368 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerName="pull" Jan 31 04:38:58 crc kubenswrapper[4931]: E0131 04:38:58.063379 4931 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerName="extract" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.063386 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerName="extract" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.063495 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb9b4e4-99a7-469f-b4f7-9cd46b023602" containerName="extract" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.064319 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.074946 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.075098 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hxhx4" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.082311 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j"] Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.103531 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1d715be-f984-4ca8-9ac4-c55f0a5add63-apiservice-cert\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.103613 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zv9g\" (UniqueName: \"kubernetes.io/projected/c1d715be-f984-4ca8-9ac4-c55f0a5add63-kube-api-access-5zv9g\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.103635 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1d715be-f984-4ca8-9ac4-c55f0a5add63-webhook-cert\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.205512 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zv9g\" (UniqueName: \"kubernetes.io/projected/c1d715be-f984-4ca8-9ac4-c55f0a5add63-kube-api-access-5zv9g\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.205581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1d715be-f984-4ca8-9ac4-c55f0a5add63-webhook-cert\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " 
pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.205669 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1d715be-f984-4ca8-9ac4-c55f0a5add63-apiservice-cert\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.228022 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1d715be-f984-4ca8-9ac4-c55f0a5add63-webhook-cert\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.228339 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1d715be-f984-4ca8-9ac4-c55f0a5add63-apiservice-cert\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.229339 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zv9g\" (UniqueName: \"kubernetes.io/projected/c1d715be-f984-4ca8-9ac4-c55f0a5add63-kube-api-access-5zv9g\") pod \"infra-operator-controller-manager-74f8d9cd6d-zrh6j\" (UID: \"c1d715be-f984-4ca8-9ac4-c55f0a5add63\") " pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.387399 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:38:58 crc kubenswrapper[4931]: I0131 04:38:58.689987 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j"] Jan 31 04:38:59 crc kubenswrapper[4931]: I0131 04:38:59.358353 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" event={"ID":"c1d715be-f984-4ca8-9ac4-c55f0a5add63","Type":"ContainerStarted","Data":"154f3f70d6bbd7f07d10d8e42403f2a25e45553000eeecf4a7f06ad2f0d40b85"} Jan 31 04:39:00 crc kubenswrapper[4931]: I0131 04:39:00.733202 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:39:00 crc kubenswrapper[4931]: I0131 04:39:00.733609 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:39:01 crc kubenswrapper[4931]: I0131 04:39:01.784703 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ptdc2" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="registry-server" probeResult="failure" output=< Jan 31 04:39:01 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 31 04:39:01 crc kubenswrapper[4931]: > Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.283525 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.284764 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.293267 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"kube-root-ca.crt" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.294205 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config-data" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.294371 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"galera-openstack-dockercfg-wcvx6" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.298046 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openshift-service-ca.crt" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.298392 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.299554 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.300417 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.302895 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"osp-secret" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.308394 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.324688 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.326195 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.352377 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.362936 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390181 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390223 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5jz9\" (UniqueName: \"kubernetes.io/projected/2ee0bbbf-b9b8-408a-9c09-8b1655718106-kube-api-access-g5jz9\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390243 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrnl\" (UniqueName: \"kubernetes.io/projected/f7186a32-8d8b-433c-b191-86787137c1d1-kube-api-access-pfrnl\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390271 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f7186a32-8d8b-433c-b191-86787137c1d1-secrets\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390389 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8c4k\" (UniqueName: \"kubernetes.io/projected/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-kube-api-access-q8c4k\") pod \"openstack-galera-0\" (UID: 
\"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390415 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/2ee0bbbf-b9b8-408a-9c09-8b1655718106-secrets\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390439 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7186a32-8d8b-433c-b191-86787137c1d1-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390453 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390497 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390538 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390558 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390578 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-secrets\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390597 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ee0bbbf-b9b8-408a-9c09-8b1655718106-config-data-generated\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390616 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-operator-scripts\") pod 
\"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390699 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390951 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-config-data-default\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.390996 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.391071 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.391090 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.391110 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-kolla-config\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492671 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-secrets\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492749 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ee0bbbf-b9b8-408a-9c09-8b1655718106-config-data-generated\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492780 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-operator-scripts\") pod \"openstack-galera-1\" (UID: 
\"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492805 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492836 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-config-data-default\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492863 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492893 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492911 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492935 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-kolla-config\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492962 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.492987 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5jz9\" (UniqueName: \"kubernetes.io/projected/2ee0bbbf-b9b8-408a-9c09-8b1655718106-kube-api-access-g5jz9\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493006 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfrnl\" (UniqueName: \"kubernetes.io/projected/f7186a32-8d8b-433c-b191-86787137c1d1-kube-api-access-pfrnl\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 
04:39:02.493031 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f7186a32-8d8b-433c-b191-86787137c1d1-secrets\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493051 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493069 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8c4k\" (UniqueName: \"kubernetes.io/projected/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-kube-api-access-q8c4k\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493093 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/2ee0bbbf-b9b8-408a-9c09-8b1655718106-secrets\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493119 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7186a32-8d8b-433c-b191-86787137c1d1-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493141 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493155 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ee0bbbf-b9b8-408a-9c09-8b1655718106-config-data-generated\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493162 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493212 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493231 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493559 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493918 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-operator-scripts\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.493941 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.494654 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.494923 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.494959 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") device mount path \"/mnt/openstack/pv04\"" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.498330 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-config-data-default\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.498356 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ee0bbbf-b9b8-408a-9c09-8b1655718106-kolla-config\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.499090 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " 
pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.506002 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7186a32-8d8b-433c-b191-86787137c1d1-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.507043 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.508144 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.508704 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.511907 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7186a32-8d8b-433c-b191-86787137c1d1-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.514419 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f7186a32-8d8b-433c-b191-86787137c1d1-secrets\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.514605 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfrnl\" (UniqueName: \"kubernetes.io/projected/f7186a32-8d8b-433c-b191-86787137c1d1-kube-api-access-pfrnl\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.514617 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5jz9\" (UniqueName: \"kubernetes.io/projected/2ee0bbbf-b9b8-408a-9c09-8b1655718106-kube-api-access-g5jz9\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.515018 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/2ee0bbbf-b9b8-408a-9c09-8b1655718106-secrets\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.517020 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" 
(UniqueName: \"kubernetes.io/secret/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-secrets\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.517973 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"2ee0bbbf-b9b8-408a-9c09-8b1655718106\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.518126 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.528199 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-2\" (UID: \"f7186a32-8d8b-433c-b191-86787137c1d1\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.532423 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8c4k\" (UniqueName: \"kubernetes.io/projected/4c4e50a2-14c9-4128-b467-67e66bd4b0ed-kube-api-access-q8c4k\") pod \"openstack-galera-0\" (UID: \"4c4e50a2-14c9-4128-b467-67e66bd4b0ed\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.615568 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.633760 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.645616 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:02 crc kubenswrapper[4931]: I0131 04:39:02.980145 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 04:39:03 crc kubenswrapper[4931]: W0131 04:39:03.002031 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c4e50a2_14c9_4128_b467_67e66bd4b0ed.slice/crio-184664647bbdabf26fe63b13b63179fd1f2523d044fe13288c9dea286aadcc46 WatchSource:0}: Error finding container 184664647bbdabf26fe63b13b63179fd1f2523d044fe13288c9dea286aadcc46: Status 404 returned error can't find the container with id 184664647bbdabf26fe63b13b63179fd1f2523d044fe13288c9dea286aadcc46 Jan 31 04:39:03 crc kubenswrapper[4931]: I0131 04:39:03.070810 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 04:39:03 crc kubenswrapper[4931]: W0131 04:39:03.072787 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ee0bbbf_b9b8_408a_9c09_8b1655718106.slice/crio-602d80de29702295861eeae7948fff7219fe16a1263ac1471a8d1bcf85c90180 WatchSource:0}: Error finding container 602d80de29702295861eeae7948fff7219fe16a1263ac1471a8d1bcf85c90180: Status 404 returned error can't find the container with id 602d80de29702295861eeae7948fff7219fe16a1263ac1471a8d1bcf85c90180 Jan 31 04:39:03 crc kubenswrapper[4931]: I0131 04:39:03.120539 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 04:39:03 crc kubenswrapper[4931]: I0131 04:39:03.382430 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"2ee0bbbf-b9b8-408a-9c09-8b1655718106","Type":"ContainerStarted","Data":"602d80de29702295861eeae7948fff7219fe16a1263ac1471a8d1bcf85c90180"} Jan 31 04:39:03 crc kubenswrapper[4931]: I0131 04:39:03.384139 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" event={"ID":"c1d715be-f984-4ca8-9ac4-c55f0a5add63","Type":"ContainerStarted","Data":"7e2e305793b7410df5699517dd9897cb458284ff5a13a19f3355725c927f45a8"} Jan 31 04:39:03 crc kubenswrapper[4931]: I0131 04:39:03.385207 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"f7186a32-8d8b-433c-b191-86787137c1d1","Type":"ContainerStarted","Data":"45f46a77e0fe156848481f44511f73ecd86192a2ea1238b3bdfa688927fe6dd1"} Jan 31 04:39:03 crc kubenswrapper[4931]: I0131 04:39:03.386059 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"4c4e50a2-14c9-4128-b467-67e66bd4b0ed","Type":"ContainerStarted","Data":"184664647bbdabf26fe63b13b63179fd1f2523d044fe13288c9dea286aadcc46"} Jan 31 04:39:04 crc kubenswrapper[4931]: I0131 04:39:04.406268 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" event={"ID":"c1d715be-f984-4ca8-9ac4-c55f0a5add63","Type":"ContainerStarted","Data":"ff378b3d24e862bc6b2ffa4523b6c608dfe9ae306b762769afa2915e132ee318"} Jan 31 04:39:04 crc kubenswrapper[4931]: I0131 04:39:04.408344 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:39:04 crc kubenswrapper[4931]: I0131 04:39:04.433274 4931 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" podStartSLOduration=2.304745771 podStartE2EDuration="6.433253377s" podCreationTimestamp="2026-01-31 04:38:58 +0000 UTC" firstStartedPulling="2026-01-31 04:38:58.702366042 +0000 UTC m=+897.511594916" lastFinishedPulling="2026-01-31 04:39:02.830873648 +0000 UTC m=+901.640102522" observedRunningTime="2026-01-31 04:39:04.429230284 +0000 UTC m=+903.238459158" watchObservedRunningTime="2026-01-31 04:39:04.433253377 +0000 UTC m=+903.242482251" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.023157 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-25phq"] Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.024758 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.082356 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-25phq"] Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.130043 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-utilities\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.130122 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-catalog-content\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.130175 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkhdw\" (UniqueName: \"kubernetes.io/projected/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-kube-api-access-xkhdw\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.231482 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-utilities\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.231548 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-catalog-content\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.231587 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkhdw\" (UniqueName: \"kubernetes.io/projected/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-kube-api-access-xkhdw\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " 
pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.232221 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-utilities\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.234645 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-catalog-content\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.279022 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkhdw\" (UniqueName: \"kubernetes.io/projected/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-kube-api-access-xkhdw\") pod \"certified-operators-25phq\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.396111 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:05 crc kubenswrapper[4931]: I0131 04:39:05.945830 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-25phq"] Jan 31 04:39:06 crc kubenswrapper[4931]: I0131 04:39:06.431915 4931 generic.go:334] "Generic (PLEG): container finished" podID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerID="aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d" exitCode=0 Jan 31 04:39:06 crc kubenswrapper[4931]: I0131 04:39:06.431961 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25phq" event={"ID":"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd","Type":"ContainerDied","Data":"aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d"} Jan 31 04:39:06 crc kubenswrapper[4931]: I0131 04:39:06.431988 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25phq" event={"ID":"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd","Type":"ContainerStarted","Data":"b29dcda24af1cc261643fb27be5752875e071a26f10cd2bdd883092359b21c98"} Jan 31 04:39:07 crc kubenswrapper[4931]: I0131 04:39:07.445569 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25phq" event={"ID":"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd","Type":"ContainerStarted","Data":"16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220"} Jan 31 04:39:08 crc kubenswrapper[4931]: I0131 04:39:08.392588 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-74f8d9cd6d-zrh6j" Jan 31 04:39:08 crc kubenswrapper[4931]: I0131 04:39:08.461682 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25phq" event={"ID":"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd","Type":"ContainerDied","Data":"16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220"} Jan 31 04:39:08 crc kubenswrapper[4931]: I0131 04:39:08.461583 4931 generic.go:334] "Generic (PLEG): container finished" podID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" 
containerID="16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220" exitCode=0 Jan 31 04:39:10 crc kubenswrapper[4931]: I0131 04:39:10.777533 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:39:10 crc kubenswrapper[4931]: I0131 04:39:10.815553 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:39:15 crc kubenswrapper[4931]: I0131 04:39:15.405545 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptdc2"] Jan 31 04:39:15 crc kubenswrapper[4931]: I0131 04:39:15.406321 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ptdc2" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="registry-server" containerID="cri-o://586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d" gracePeriod=2 Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.390094 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.391172 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.393846 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"memcached-config-data" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.394923 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"memcached-memcached-dockercfg-bllkg" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.404881 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.509435 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-kolla-config\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.509622 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-config-data\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.509688 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2tkn\" (UniqueName: \"kubernetes.io/projected/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-kube-api-access-f2tkn\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.514903 4931 generic.go:334] "Generic (PLEG): container finished" podID="3638da9a-35df-4043-801f-33c957a61b61" containerID="586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d" exitCode=0 Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.514944 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptdc2" 
event={"ID":"3638da9a-35df-4043-801f-33c957a61b61","Type":"ContainerDied","Data":"586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d"} Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.617549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-kolla-config\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.617623 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-config-data\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.617691 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2tkn\" (UniqueName: \"kubernetes.io/projected/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-kube-api-access-f2tkn\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.619970 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-kolla-config\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.620039 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-config-data\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.642933 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2tkn\" (UniqueName: \"kubernetes.io/projected/6e70e69a-ca63-4885-91ee-92f55b9c3c5c-kube-api-access-f2tkn\") pod \"memcached-0\" (UID: \"6e70e69a-ca63-4885-91ee-92f55b9c3c5c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:16 crc kubenswrapper[4931]: I0131 04:39:16.711852 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.021291 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-55xlj"] Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.022092 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.024615 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-4hjxh" Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.036389 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-55xlj"] Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.071524 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qjrv\" (UniqueName: \"kubernetes.io/projected/7974a9c9-4c6f-4588-9674-92ab3c2a28ca-kube-api-access-7qjrv\") pod \"rabbitmq-cluster-operator-index-55xlj\" (UID: \"7974a9c9-4c6f-4588-9674-92ab3c2a28ca\") " pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.172881 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qjrv\" (UniqueName: \"kubernetes.io/projected/7974a9c9-4c6f-4588-9674-92ab3c2a28ca-kube-api-access-7qjrv\") pod \"rabbitmq-cluster-operator-index-55xlj\" (UID: \"7974a9c9-4c6f-4588-9674-92ab3c2a28ca\") " pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.195019 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qjrv\" (UniqueName: \"kubernetes.io/projected/7974a9c9-4c6f-4588-9674-92ab3c2a28ca-kube-api-access-7qjrv\") pod \"rabbitmq-cluster-operator-index-55xlj\" (UID: \"7974a9c9-4c6f-4588-9674-92ab3c2a28ca\") " pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:19 crc kubenswrapper[4931]: I0131 04:39:19.338828 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:20 crc kubenswrapper[4931]: E0131 04:39:20.733748 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d is running failed: container process not found" containerID="586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 04:39:20 crc kubenswrapper[4931]: E0131 04:39:20.734190 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d is running failed: container process not found" containerID="586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 04:39:20 crc kubenswrapper[4931]: E0131 04:39:20.734472 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d is running failed: container process not found" containerID="586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 04:39:20 crc kubenswrapper[4931]: E0131 04:39:20.734502 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-ptdc2" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="registry-server" Jan 31 04:39:21 crc kubenswrapper[4931]: I0131 04:39:21.133464 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:39:21 crc kubenswrapper[4931]: I0131 04:39:21.133532 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.347055 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.432340 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-catalog-content\") pod \"3638da9a-35df-4043-801f-33c957a61b61\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.432404 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-utilities\") pod \"3638da9a-35df-4043-801f-33c957a61b61\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.432515 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzblg\" (UniqueName: \"kubernetes.io/projected/3638da9a-35df-4043-801f-33c957a61b61-kube-api-access-rzblg\") pod \"3638da9a-35df-4043-801f-33c957a61b61\" (UID: \"3638da9a-35df-4043-801f-33c957a61b61\") " Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.436039 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-utilities" (OuterVolumeSpecName: "utilities") pod "3638da9a-35df-4043-801f-33c957a61b61" (UID: "3638da9a-35df-4043-801f-33c957a61b61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.439159 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3638da9a-35df-4043-801f-33c957a61b61-kube-api-access-rzblg" (OuterVolumeSpecName: "kube-api-access-rzblg") pod "3638da9a-35df-4043-801f-33c957a61b61" (UID: "3638da9a-35df-4043-801f-33c957a61b61"). InnerVolumeSpecName "kube-api-access-rzblg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.490597 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-55xlj"] Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.496929 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 04:39:22 crc kubenswrapper[4931]: W0131 04:39:22.500713 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7974a9c9_4c6f_4588_9674_92ab3c2a28ca.slice/crio-b057bab2ac9e0df2a8aaf917671d011574c246d8a435543845f94edb1cbc4666 WatchSource:0}: Error finding container b057bab2ac9e0df2a8aaf917671d011574c246d8a435543845f94edb1cbc4666: Status 404 returned error can't find the container with id b057bab2ac9e0df2a8aaf917671d011574c246d8a435543845f94edb1cbc4666 Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.534170 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzblg\" (UniqueName: \"kubernetes.io/projected/3638da9a-35df-4043-801f-33c957a61b61-kube-api-access-rzblg\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.534393 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.554935 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptdc2" event={"ID":"3638da9a-35df-4043-801f-33c957a61b61","Type":"ContainerDied","Data":"6470841b3a7abe6556e783bde3e6d71ca8057cd0a7cb6f789a2b2a17cf9c1a3a"} Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.555025 4931 scope.go:117] "RemoveContainer" containerID="586662b44d6ee4e61b45f9eedefbb79a33874c9572a204b47fd08b614333617d" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.555360 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptdc2" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.556370 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" event={"ID":"7974a9c9-4c6f-4588-9674-92ab3c2a28ca","Type":"ContainerStarted","Data":"b057bab2ac9e0df2a8aaf917671d011574c246d8a435543845f94edb1cbc4666"} Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.557476 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"6e70e69a-ca63-4885-91ee-92f55b9c3c5c","Type":"ContainerStarted","Data":"ede8274d596266c6672d3a61eb7fd490cd84b84f998a5c3292b9044d3d736fea"} Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.572712 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3638da9a-35df-4043-801f-33c957a61b61" (UID: "3638da9a-35df-4043-801f-33c957a61b61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.574048 4931 scope.go:117] "RemoveContainer" containerID="bcf497fd4f98d307522bb53d01fb18fd5fbe4bc809246d7bd46386807feaa627" Jan 31 04:39:22 crc kubenswrapper[4931]: E0131 04:39:22.578602 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 31 04:39:22 crc kubenswrapper[4931]: E0131 04:39:22.578845 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5jz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-1_glance-kuttl-tests(2ee0bbbf-b9b8-408a-9c09-8b1655718106): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:39:22 crc kubenswrapper[4931]: E0131 04:39:22.580191 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="glance-kuttl-tests/openstack-galera-1" podUID="2ee0bbbf-b9b8-408a-9c09-8b1655718106" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.594917 4931 scope.go:117] "RemoveContainer" containerID="c854206a2de46f57ac49fce3627cb768dd9e4ffb2f3e29737025e0363dd94a60" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.636689 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3638da9a-35df-4043-801f-33c957a61b61-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.881495 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptdc2"] Jan 31 04:39:22 crc kubenswrapper[4931]: I0131 04:39:22.889374 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ptdc2"] Jan 31 04:39:23 crc kubenswrapper[4931]: E0131 04:39:23.605888 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="glance-kuttl-tests/openstack-galera-1" podUID="2ee0bbbf-b9b8-408a-9c09-8b1655718106" Jan 31 04:39:23 crc kubenswrapper[4931]: E0131 04:39:23.625751 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 31 04:39:23 crc kubenswrapper[4931]: E0131 04:39:23.625913 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfrnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-2_glance-kuttl-tests(f7186a32-8d8b-433c-b191-86787137c1d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:39:23 crc kubenswrapper[4931]: E0131 04:39:23.627260 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="glance-kuttl-tests/openstack-galera-2" podUID="f7186a32-8d8b-433c-b191-86787137c1d1" Jan 31 04:39:23 crc kubenswrapper[4931]: E0131 04:39:23.841733 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 31 04:39:23 crc kubenswrapper[4931]: E0131 04:39:23.842226 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8c4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_glance-kuttl-tests(4c4e50a2-14c9-4128-b467-67e66bd4b0ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:39:23 crc kubenswrapper[4931]: E0131 04:39:23.843382 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="glance-kuttl-tests/openstack-galera-0" podUID="4c4e50a2-14c9-4128-b467-67e66bd4b0ed" Jan 31 04:39:23 crc kubenswrapper[4931]: I0131 04:39:23.903473 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3638da9a-35df-4043-801f-33c957a61b61" path="/var/lib/kubelet/pods/3638da9a-35df-4043-801f-33c957a61b61/volumes" Jan 31 04:39:24 crc kubenswrapper[4931]: I0131 04:39:24.573031 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25phq" event={"ID":"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd","Type":"ContainerStarted","Data":"f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09"} Jan 31 04:39:24 crc kubenswrapper[4931]: E0131 04:39:24.574617 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="glance-kuttl-tests/openstack-galera-2" podUID="f7186a32-8d8b-433c-b191-86787137c1d1" Jan 31 04:39:24 crc kubenswrapper[4931]: E0131 04:39:24.574837 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="glance-kuttl-tests/openstack-galera-0" podUID="4c4e50a2-14c9-4128-b467-67e66bd4b0ed" Jan 31 04:39:24 crc kubenswrapper[4931]: I0131 04:39:24.676950 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-25phq" podStartSLOduration=2.42318222 podStartE2EDuration="19.676911876s" podCreationTimestamp="2026-01-31 04:39:05 +0000 UTC" firstStartedPulling="2026-01-31 04:39:06.437369854 +0000 UTC m=+905.246598738" lastFinishedPulling="2026-01-31 04:39:23.69109952 +0000 UTC m=+922.500328394" observedRunningTime="2026-01-31 04:39:24.660441904 +0000 UTC m=+923.469670778" watchObservedRunningTime="2026-01-31 04:39:24.676911876 +0000 UTC m=+923.486140750" Jan 31 04:39:25 crc kubenswrapper[4931]: I0131 04:39:25.397126 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:25 crc kubenswrapper[4931]: I0131 04:39:25.398928 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:26 crc kubenswrapper[4931]: I0131 04:39:26.450169 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-25phq" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="registry-server" probeResult="failure" output=< Jan 31 04:39:26 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 31 04:39:26 crc kubenswrapper[4931]: > Jan 31 04:39:28 crc kubenswrapper[4931]: I0131 04:39:28.608004 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"6e70e69a-ca63-4885-91ee-92f55b9c3c5c","Type":"ContainerStarted","Data":"678a19b0be49bede7c16969eb63219ba01b678d74fd24615a144dd35f7844a44"} Jan 31 04:39:28 crc kubenswrapper[4931]: I0131 04:39:28.608797 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:28 crc kubenswrapper[4931]: I0131 04:39:28.610821 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" event={"ID":"7974a9c9-4c6f-4588-9674-92ab3c2a28ca","Type":"ContainerStarted","Data":"f12143145dba744723f623682e5aa8399c905015154080ddd4771562e7e11d21"} Jan 31 04:39:28 crc kubenswrapper[4931]: I0131 04:39:28.639851 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/memcached-0" podStartSLOduration=8.130360695 podStartE2EDuration="12.639815838s" podCreationTimestamp="2026-01-31 04:39:16 +0000 UTC" firstStartedPulling="2026-01-31 04:39:22.504180451 +0000 UTC m=+921.313409325" lastFinishedPulling="2026-01-31 04:39:27.013635604 +0000 UTC m=+925.822864468" observedRunningTime="2026-01-31 04:39:28.633204733 +0000 UTC m=+927.442433607" 
watchObservedRunningTime="2026-01-31 04:39:28.639815838 +0000 UTC m=+927.449044722" Jan 31 04:39:28 crc kubenswrapper[4931]: I0131 04:39:28.662862 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" podStartSLOduration=4.144220901 podStartE2EDuration="9.662837363s" podCreationTimestamp="2026-01-31 04:39:19 +0000 UTC" firstStartedPulling="2026-01-31 04:39:22.503673027 +0000 UTC m=+921.312901901" lastFinishedPulling="2026-01-31 04:39:28.022289479 +0000 UTC m=+926.831518363" observedRunningTime="2026-01-31 04:39:28.654246692 +0000 UTC m=+927.463475596" watchObservedRunningTime="2026-01-31 04:39:28.662837363 +0000 UTC m=+927.472066237" Jan 31 04:39:29 crc kubenswrapper[4931]: I0131 04:39:29.339668 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:29 crc kubenswrapper[4931]: I0131 04:39:29.339749 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:29 crc kubenswrapper[4931]: I0131 04:39:29.368574 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:35 crc kubenswrapper[4931]: I0131 04:39:35.442704 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:35 crc kubenswrapper[4931]: I0131 04:39:35.489748 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:36 crc kubenswrapper[4931]: I0131 04:39:36.713184 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/memcached-0" Jan 31 04:39:37 crc kubenswrapper[4931]: I0131 04:39:37.671758 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"2ee0bbbf-b9b8-408a-9c09-8b1655718106","Type":"ContainerStarted","Data":"26fda1ef1e1a5a294d85da32f69bc5384b76af09a140456cdee802d12df5449c"} Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.007975 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-25phq"] Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.008256 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-25phq" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="registry-server" containerID="cri-o://f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09" gracePeriod=2 Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.368613 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-55xlj" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.426832 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.514307 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-catalog-content\") pod \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.514409 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkhdw\" (UniqueName: \"kubernetes.io/projected/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-kube-api-access-xkhdw\") pod \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.514456 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-utilities\") pod \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\" (UID: \"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd\") " Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.515254 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-utilities" (OuterVolumeSpecName: "utilities") pod "e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" (UID: "e460b23a-89cd-4d07-8c4f-31e13a8dbbdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.519612 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-kube-api-access-xkhdw" (OuterVolumeSpecName: "kube-api-access-xkhdw") pod "e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" (UID: "e460b23a-89cd-4d07-8c4f-31e13a8dbbdd"). InnerVolumeSpecName "kube-api-access-xkhdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.581000 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" (UID: "e460b23a-89cd-4d07-8c4f-31e13a8dbbdd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.616380 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkhdw\" (UniqueName: \"kubernetes.io/projected/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-kube-api-access-xkhdw\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.616418 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.616428 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.686540 4931 generic.go:334] "Generic (PLEG): container finished" podID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerID="f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09" exitCode=0 Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.686620 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-25phq" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.686622 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25phq" event={"ID":"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd","Type":"ContainerDied","Data":"f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09"} Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.686781 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25phq" event={"ID":"e460b23a-89cd-4d07-8c4f-31e13a8dbbdd","Type":"ContainerDied","Data":"b29dcda24af1cc261643fb27be5752875e071a26f10cd2bdd883092359b21c98"} Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.686818 4931 scope.go:117] "RemoveContainer" containerID="f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.688397 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"f7186a32-8d8b-433c-b191-86787137c1d1","Type":"ContainerStarted","Data":"59ae68400042c0752360d415556a576607084832c30417aa8a6c99181ae49214"} Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.704622 4931 scope.go:117] "RemoveContainer" containerID="16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.727762 4931 scope.go:117] "RemoveContainer" containerID="aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.732892 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-25phq"] Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.746354 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-25phq"] Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.750224 4931 scope.go:117] "RemoveContainer" containerID="f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09" Jan 31 04:39:39 crc kubenswrapper[4931]: E0131 04:39:39.750735 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09\": container with ID starting with f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09 not found: ID does not exist" containerID="f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.750847 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09"} err="failed to get container status \"f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09\": rpc error: code = NotFound desc = could not find container \"f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09\": container with ID starting with f99e68ee00175bef25e4f97fdcc6c8ea55f1f9e9ce709d833f66f0d0a6123e09 not found: ID does not exist" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.750945 4931 scope.go:117] "RemoveContainer" containerID="16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220" Jan 31 04:39:39 crc kubenswrapper[4931]: E0131 04:39:39.751453 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220\": container with ID starting with 16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220 not found: ID does not exist" containerID="16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.751522 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220"} err="failed to get container status \"16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220\": rpc error: code = NotFound desc = could not find container \"16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220\": container with ID starting with 16cd002450f7751d864b5654c8844c52131db7885451c0b24e20a4923fa23220 not found: ID does not exist" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.751552 4931 scope.go:117] "RemoveContainer" containerID="aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d" Jan 31 04:39:39 crc kubenswrapper[4931]: E0131 04:39:39.751869 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d\": container with ID starting with aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d not found: ID does not exist" containerID="aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.751943 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d"} err="failed to get container status \"aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d\": rpc error: code = NotFound desc = could not find container \"aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d\": container with ID starting with aac082823cd56a2e0a875bfeabcfed656dc4d1238af4bef5fcb85f74a489d51d not found: ID does not exist" Jan 31 04:39:39 crc kubenswrapper[4931]: I0131 04:39:39.911960 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" 
path="/var/lib/kubelet/pods/e460b23a-89cd-4d07-8c4f-31e13a8dbbdd/volumes" Jan 31 04:39:40 crc kubenswrapper[4931]: I0131 04:39:40.699175 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"4c4e50a2-14c9-4128-b467-67e66bd4b0ed","Type":"ContainerStarted","Data":"fafc025dc6a4285e91ad6b6ca135fcce33d215bf0f3249d8c281698c37a7d41c"} Jan 31 04:39:41 crc kubenswrapper[4931]: I0131 04:39:41.706575 4931 generic.go:334] "Generic (PLEG): container finished" podID="2ee0bbbf-b9b8-408a-9c09-8b1655718106" containerID="26fda1ef1e1a5a294d85da32f69bc5384b76af09a140456cdee802d12df5449c" exitCode=0 Jan 31 04:39:41 crc kubenswrapper[4931]: I0131 04:39:41.706662 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"2ee0bbbf-b9b8-408a-9c09-8b1655718106","Type":"ContainerDied","Data":"26fda1ef1e1a5a294d85da32f69bc5384b76af09a140456cdee802d12df5449c"} Jan 31 04:39:42 crc kubenswrapper[4931]: I0131 04:39:42.721849 4931 generic.go:334] "Generic (PLEG): container finished" podID="f7186a32-8d8b-433c-b191-86787137c1d1" containerID="59ae68400042c0752360d415556a576607084832c30417aa8a6c99181ae49214" exitCode=0 Jan 31 04:39:42 crc kubenswrapper[4931]: I0131 04:39:42.722083 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"f7186a32-8d8b-433c-b191-86787137c1d1","Type":"ContainerDied","Data":"59ae68400042c0752360d415556a576607084832c30417aa8a6c99181ae49214"} Jan 31 04:39:42 crc kubenswrapper[4931]: I0131 04:39:42.727178 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"2ee0bbbf-b9b8-408a-9c09-8b1655718106","Type":"ContainerStarted","Data":"350813df16df252d0ddeb2d201e1cc99cec62de571a5c8075348460176e18593"} Jan 31 04:39:42 crc kubenswrapper[4931]: I0131 04:39:42.801957 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-1" podStartSLOduration=7.481625012 podStartE2EDuration="41.801936059s" podCreationTimestamp="2026-01-31 04:39:01 +0000 UTC" firstStartedPulling="2026-01-31 04:39:03.075539812 +0000 UTC m=+901.884768686" lastFinishedPulling="2026-01-31 04:39:37.395850849 +0000 UTC m=+936.205079733" observedRunningTime="2026-01-31 04:39:42.799760628 +0000 UTC m=+941.608989522" watchObservedRunningTime="2026-01-31 04:39:42.801936059 +0000 UTC m=+941.611164933" Jan 31 04:39:43 crc kubenswrapper[4931]: I0131 04:39:43.734619 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"f7186a32-8d8b-433c-b191-86787137c1d1","Type":"ContainerStarted","Data":"62bf8626a890350858c82060eb1127945a043c5cbc1bdcec9765eb77e6f37b05"} Jan 31 04:39:43 crc kubenswrapper[4931]: I0131 04:39:43.756543 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-2" podStartSLOduration=-9223371994.098253 podStartE2EDuration="42.756522039s" podCreationTimestamp="2026-01-31 04:39:01 +0000 UTC" firstStartedPulling="2026-01-31 04:39:03.144327006 +0000 UTC m=+901.953555880" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:39:43.753097053 +0000 UTC m=+942.562325927" watchObservedRunningTime="2026-01-31 04:39:43.756522039 +0000 UTC m=+942.565750913" Jan 31 04:39:43 crc kubenswrapper[4931]: E0131 04:39:43.949286 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.179:55138->38.102.83.179:42537: write tcp 38.102.83.179:55138->38.102.83.179:42537: write: broken pipe Jan 31 04:39:44 crc kubenswrapper[4931]: I0131 04:39:44.742091 4931 generic.go:334] "Generic (PLEG): container finished" podID="4c4e50a2-14c9-4128-b467-67e66bd4b0ed" containerID="fafc025dc6a4285e91ad6b6ca135fcce33d215bf0f3249d8c281698c37a7d41c" exitCode=0 Jan 31 04:39:44 crc kubenswrapper[4931]: I0131 04:39:44.742192 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"4c4e50a2-14c9-4128-b467-67e66bd4b0ed","Type":"ContainerDied","Data":"fafc025dc6a4285e91ad6b6ca135fcce33d215bf0f3249d8c281698c37a7d41c"} Jan 31 04:39:45 crc kubenswrapper[4931]: I0131 04:39:45.750835 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"4c4e50a2-14c9-4128-b467-67e66bd4b0ed","Type":"ContainerStarted","Data":"f8657aaa18a1575def0c79a603125b3aecd2a590df9bec798a837b922d42fe67"} Jan 31 04:39:45 crc kubenswrapper[4931]: I0131 04:39:45.769428 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-0" podStartSLOduration=-9223371992.085367 podStartE2EDuration="44.769408576s" podCreationTimestamp="2026-01-31 04:39:01 +0000 UTC" firstStartedPulling="2026-01-31 04:39:03.005485442 +0000 UTC m=+901.814714306" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:39:45.76776875 +0000 UTC m=+944.576997624" watchObservedRunningTime="2026-01-31 04:39:45.769408576 +0000 UTC m=+944.578637440" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.043991 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p"] Jan 31 04:39:48 crc kubenswrapper[4931]: E0131 04:39:48.044589 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="extract-content" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044604 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="extract-content" Jan 31 04:39:48 crc kubenswrapper[4931]: E0131 04:39:48.044621 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="registry-server" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044629 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="registry-server" Jan 31 04:39:48 crc kubenswrapper[4931]: E0131 04:39:48.044642 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="extract-content" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044651 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="extract-content" Jan 31 04:39:48 crc kubenswrapper[4931]: E0131 04:39:48.044665 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="extract-utilities" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044672 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="extract-utilities" Jan 31 04:39:48 crc kubenswrapper[4931]: E0131 04:39:48.044685 4931 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="extract-utilities" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044693 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="extract-utilities" Jan 31 04:39:48 crc kubenswrapper[4931]: E0131 04:39:48.044709 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="registry-server" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044716 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="registry-server" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044862 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3638da9a-35df-4043-801f-33c957a61b61" containerName="registry-server" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.044876 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e460b23a-89cd-4d07-8c4f-31e13a8dbbdd" containerName="registry-server" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.045912 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.047957 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rb6qm" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.052545 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p"] Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.234312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.234381 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.234477 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzbwp\" (UniqueName: \"kubernetes.io/projected/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-kube-api-access-lzbwp\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.335463 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.335520 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.335565 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzbwp\" (UniqueName: \"kubernetes.io/projected/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-kube-api-access-lzbwp\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.336043 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.336062 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.363167 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzbwp\" (UniqueName: \"kubernetes.io/projected/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-kube-api-access-lzbwp\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.364706 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:48 crc kubenswrapper[4931]: I0131 04:39:48.784621 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p"] Jan 31 04:39:48 crc kubenswrapper[4931]: W0131 04:39:48.791793 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5149516f_c8ae_4644_af21_9ad3dd0e6bb3.slice/crio-cdecdabcd50ad04be5f2df0f9c59b73d9e967216b65f121614e743e17900cb0c WatchSource:0}: Error finding container cdecdabcd50ad04be5f2df0f9c59b73d9e967216b65f121614e743e17900cb0c: Status 404 returned error can't find the container with id cdecdabcd50ad04be5f2df0f9c59b73d9e967216b65f121614e743e17900cb0c Jan 31 04:39:49 crc kubenswrapper[4931]: I0131 04:39:49.778504 4931 generic.go:334] "Generic (PLEG): container finished" podID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerID="a52cbdb2fcc4bbf2d18360316b597b7bed17adc0b5f814263d1a698e914a3684" exitCode=0 Jan 31 04:39:49 crc kubenswrapper[4931]: I0131 04:39:49.778563 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" event={"ID":"5149516f-c8ae-4644-af21-9ad3dd0e6bb3","Type":"ContainerDied","Data":"a52cbdb2fcc4bbf2d18360316b597b7bed17adc0b5f814263d1a698e914a3684"} Jan 31 04:39:49 crc kubenswrapper[4931]: I0131 04:39:49.778827 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" event={"ID":"5149516f-c8ae-4644-af21-9ad3dd0e6bb3","Type":"ContainerStarted","Data":"cdecdabcd50ad04be5f2df0f9c59b73d9e967216b65f121614e743e17900cb0c"} Jan 31 04:39:51 crc kubenswrapper[4931]: I0131 04:39:51.133158 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:39:51 crc kubenswrapper[4931]: I0131 04:39:51.133272 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:39:51 crc kubenswrapper[4931]: I0131 04:39:51.794837 4931 generic.go:334] "Generic (PLEG): container finished" podID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerID="5992e4a4f5d162f32df236526ba1878071291ec9cb162a85ea9be6995a57a263" exitCode=0 Jan 31 04:39:51 crc kubenswrapper[4931]: I0131 04:39:51.794937 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" event={"ID":"5149516f-c8ae-4644-af21-9ad3dd0e6bb3","Type":"ContainerDied","Data":"5992e4a4f5d162f32df236526ba1878071291ec9cb162a85ea9be6995a57a263"} Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.616345 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.616820 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-0" 
Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.634663 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.635223 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.646616 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.646714 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.712458 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.806583 4931 generic.go:334] "Generic (PLEG): container finished" podID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerID="f582cfbca8a532e641d7a22eb5683639b54497968f2269de4904b9b95e79aef3" exitCode=0 Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.806876 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" event={"ID":"5149516f-c8ae-4644-af21-9ad3dd0e6bb3","Type":"ContainerDied","Data":"f582cfbca8a532e641d7a22eb5683639b54497968f2269de4904b9b95e79aef3"} Jan 31 04:39:52 crc kubenswrapper[4931]: I0131 04:39:52.856303 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.087770 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.233214 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzbwp\" (UniqueName: \"kubernetes.io/projected/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-kube-api-access-lzbwp\") pod \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.233325 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-bundle\") pod \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.233405 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-util\") pod \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\" (UID: \"5149516f-c8ae-4644-af21-9ad3dd0e6bb3\") " Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.234639 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-bundle" (OuterVolumeSpecName: "bundle") pod "5149516f-c8ae-4644-af21-9ad3dd0e6bb3" (UID: "5149516f-c8ae-4644-af21-9ad3dd0e6bb3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.240877 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-kube-api-access-lzbwp" (OuterVolumeSpecName: "kube-api-access-lzbwp") pod "5149516f-c8ae-4644-af21-9ad3dd0e6bb3" (UID: "5149516f-c8ae-4644-af21-9ad3dd0e6bb3"). InnerVolumeSpecName "kube-api-access-lzbwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.335389 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.335454 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzbwp\" (UniqueName: \"kubernetes.io/projected/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-kube-api-access-lzbwp\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.418003 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-util" (OuterVolumeSpecName: "util") pod "5149516f-c8ae-4644-af21-9ad3dd0e6bb3" (UID: "5149516f-c8ae-4644-af21-9ad3dd0e6bb3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.437662 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5149516f-c8ae-4644-af21-9ad3dd0e6bb3-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.825700 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" event={"ID":"5149516f-c8ae-4644-af21-9ad3dd0e6bb3","Type":"ContainerDied","Data":"cdecdabcd50ad04be5f2df0f9c59b73d9e967216b65f121614e743e17900cb0c"} Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.825780 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdecdabcd50ad04be5f2df0f9c59b73d9e967216b65f121614e743e17900cb0c" Jan 31 04:39:54 crc kubenswrapper[4931]: I0131 04:39:54.825799 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p" Jan 31 04:40:02 crc kubenswrapper[4931]: I0131 04:40:02.685311 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-2" podUID="f7186a32-8d8b-433c-b191-86787137c1d1" containerName="galera" probeResult="failure" output=< Jan 31 04:40:02 crc kubenswrapper[4931]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 31 04:40:02 crc kubenswrapper[4931]: > Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.290482 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r"] Jan 31 04:40:03 crc kubenswrapper[4931]: E0131 04:40:03.290827 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerName="pull" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.290844 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerName="pull" Jan 31 04:40:03 crc kubenswrapper[4931]: E0131 04:40:03.290871 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerName="extract" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.290879 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerName="extract" Jan 31 04:40:03 crc kubenswrapper[4931]: E0131 04:40:03.290892 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerName="util" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.290902 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerName="util" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.291026 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5149516f-c8ae-4644-af21-9ad3dd0e6bb3" containerName="extract" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.291591 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.293781 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-m4mvr" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.311481 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r"] Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.362425 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpz6\" (UniqueName: \"kubernetes.io/projected/13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca-kube-api-access-pxpz6\") pod \"rabbitmq-cluster-operator-779fc9694b-z858r\" (UID: \"13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.463865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpz6\" (UniqueName: \"kubernetes.io/projected/13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca-kube-api-access-pxpz6\") pod \"rabbitmq-cluster-operator-779fc9694b-z858r\" (UID: \"13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.487069 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpz6\" (UniqueName: \"kubernetes.io/projected/13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca-kube-api-access-pxpz6\") pod \"rabbitmq-cluster-operator-779fc9694b-z858r\" (UID: \"13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.548914 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.592492 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 04:40:03 crc kubenswrapper[4931]: I0131 04:40:03.607065 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" Jan 31 04:40:03 crc kubenswrapper[4931]: E0131 04:40:03.740054 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:49504->38.102.83.179:42537: write tcp 38.102.83.179:49504->38.102.83.179:42537: write: broken pipe Jan 31 04:40:04 crc kubenswrapper[4931]: I0131 04:40:04.105804 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r"] Jan 31 04:40:04 crc kubenswrapper[4931]: I0131 04:40:04.894269 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" event={"ID":"13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca","Type":"ContainerStarted","Data":"702b5f5b09a72de594ed8bea7fdaef4c7f34dc3ec04baca45d4c513656cb67d9"} Jan 31 04:40:07 crc kubenswrapper[4931]: I0131 04:40:07.854649 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:40:07 crc kubenswrapper[4931]: I0131 04:40:07.904568 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 04:40:08 crc kubenswrapper[4931]: I0131 04:40:08.921447 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" event={"ID":"13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca","Type":"ContainerStarted","Data":"0a7e795fd2d58da24126ef73c8c41d94e584dbf6dd70756f949e3b7ba7ca4bef"} Jan 31 04:40:08 crc kubenswrapper[4931]: I0131 04:40:08.941830 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-z858r" podStartSLOduration=1.45811385 podStartE2EDuration="5.94180855s" podCreationTimestamp="2026-01-31 04:40:03 +0000 UTC" firstStartedPulling="2026-01-31 04:40:04.111953193 +0000 UTC m=+962.921182067" lastFinishedPulling="2026-01-31 04:40:08.595647893 +0000 UTC m=+967.404876767" observedRunningTime="2026-01-31 04:40:08.934491055 +0000 UTC m=+967.743719929" watchObservedRunningTime="2026-01-31 04:40:08.94180855 +0000 UTC m=+967.751037424" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.430502 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.432411 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.435007 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.435048 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-server-conf" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.435322 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-default-user" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.436706 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-server-dockercfg-4v68t" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.437479 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-plugins-conf" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.468906 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.516236 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3166808f-2786-4207-9cb3-f32437499a16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.516293 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3166808f-2786-4207-9cb3-f32437499a16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.516313 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.516337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.516377 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3166808f-2786-4207-9cb3-f32437499a16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.516410 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc 
kubenswrapper[4931]: I0131 04:40:14.516441 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.516460 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btz2w\" (UniqueName: \"kubernetes.io/projected/3166808f-2786-4207-9cb3-f32437499a16-kube-api-access-btz2w\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.617812 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.617861 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btz2w\" (UniqueName: \"kubernetes.io/projected/3166808f-2786-4207-9cb3-f32437499a16-kube-api-access-btz2w\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.617915 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3166808f-2786-4207-9cb3-f32437499a16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.617950 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3166808f-2786-4207-9cb3-f32437499a16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.617969 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.617988 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.618009 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3166808f-2786-4207-9cb3-f32437499a16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.618036 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.618806 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.620404 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.620712 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3166808f-2786-4207-9cb3-f32437499a16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.622695 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.622753 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/833470f2088f309c093508f999cdffd4febe531d45d9f5f2fe56bc4462a9339a/globalmount\"" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.624796 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3166808f-2786-4207-9cb3-f32437499a16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.625711 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3166808f-2786-4207-9cb3-f32437499a16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.635078 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3166808f-2786-4207-9cb3-f32437499a16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.637900 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btz2w\" (UniqueName: 
\"kubernetes.io/projected/3166808f-2786-4207-9cb3-f32437499a16-kube-api-access-btz2w\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.658300 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ea0980d-684e-48b5-9c58-ee3d51b74b18\") pod \"rabbitmq-server-0\" (UID: \"3166808f-2786-4207-9cb3-f32437499a16\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:14 crc kubenswrapper[4931]: I0131 04:40:14.757424 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:40:15 crc kubenswrapper[4931]: I0131 04:40:15.230189 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 04:40:15 crc kubenswrapper[4931]: I0131 04:40:15.981210 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"3166808f-2786-4207-9cb3-f32437499a16","Type":"ContainerStarted","Data":"0d023aa866412a3fecfa12e7c36e3d7a0c9fdcf3d9e87406b23d3058331e6e8d"} Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.011061 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-cjqh2"] Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.011788 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.013600 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-bx626" Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.029110 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-cjqh2"] Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.041171 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjqn\" (UniqueName: \"kubernetes.io/projected/14ff67e1-aa92-4a09-94e4-d96a354498d4-kube-api-access-chjqn\") pod \"keystone-operator-index-cjqh2\" (UID: \"14ff67e1-aa92-4a09-94e4-d96a354498d4\") " pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.142372 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjqn\" (UniqueName: \"kubernetes.io/projected/14ff67e1-aa92-4a09-94e4-d96a354498d4-kube-api-access-chjqn\") pod \"keystone-operator-index-cjqh2\" (UID: \"14ff67e1-aa92-4a09-94e4-d96a354498d4\") " pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.181605 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjqn\" (UniqueName: \"kubernetes.io/projected/14ff67e1-aa92-4a09-94e4-d96a354498d4-kube-api-access-chjqn\") pod \"keystone-operator-index-cjqh2\" (UID: \"14ff67e1-aa92-4a09-94e4-d96a354498d4\") " pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.340839 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.772119 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-cjqh2"] Jan 31 04:40:16 crc kubenswrapper[4931]: I0131 04:40:16.990204 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-cjqh2" event={"ID":"14ff67e1-aa92-4a09-94e4-d96a354498d4","Type":"ContainerStarted","Data":"d49252ff8bca07e0f8348374db05ac2d35f9b050f92eb37b96a9a7e0eb371379"} Jan 31 04:40:21 crc kubenswrapper[4931]: I0131 04:40:21.132986 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:40:21 crc kubenswrapper[4931]: I0131 04:40:21.133361 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:40:21 crc kubenswrapper[4931]: I0131 04:40:21.133406 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:40:21 crc kubenswrapper[4931]: I0131 04:40:21.133918 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ceca043a424e437ecc5528abaae063cab9d2263bd81f19cc01aa29b5f8717c7"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:40:21 crc kubenswrapper[4931]: I0131 04:40:21.133960 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://1ceca043a424e437ecc5528abaae063cab9d2263bd81f19cc01aa29b5f8717c7" gracePeriod=600 Jan 31 04:40:22 crc kubenswrapper[4931]: I0131 04:40:22.031380 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="1ceca043a424e437ecc5528abaae063cab9d2263bd81f19cc01aa29b5f8717c7" exitCode=0 Jan 31 04:40:22 crc kubenswrapper[4931]: I0131 04:40:22.031424 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"1ceca043a424e437ecc5528abaae063cab9d2263bd81f19cc01aa29b5f8717c7"} Jan 31 04:40:22 crc kubenswrapper[4931]: I0131 04:40:22.031460 4931 scope.go:117] "RemoveContainer" containerID="c4a3ca51a63c7b2c90394d04ff6f72a437e5de1d31d438d34041e2aabc18bbc0" Jan 31 04:40:28 crc kubenswrapper[4931]: I0131 04:40:28.078225 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"f6fdd63c2992141cc392f9894235afa4f2697a4d0af5fdfafac1d9c21aba8ff3"} Jan 31 04:40:28 crc kubenswrapper[4931]: I0131 04:40:28.079860 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-cjqh2" event={"ID":"14ff67e1-aa92-4a09-94e4-d96a354498d4","Type":"ContainerStarted","Data":"7b0d6e083031370dd196c64d8c9f2fcf4e74a221436d2c943fc98d8944d34f8e"} Jan 31 04:40:28 crc kubenswrapper[4931]: I0131 04:40:28.118282 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-cjqh2" podStartSLOduration=3.5287619120000002 podStartE2EDuration="13.118256105s" podCreationTimestamp="2026-01-31 04:40:15 +0000 UTC" firstStartedPulling="2026-01-31 04:40:16.78192065 +0000 UTC m=+975.591149544" lastFinishedPulling="2026-01-31 04:40:26.371414863 +0000 UTC m=+985.180643737" observedRunningTime="2026-01-31 04:40:28.1115573 +0000 UTC m=+986.920786174" watchObservedRunningTime="2026-01-31 04:40:28.118256105 +0000 UTC m=+986.927484989" Jan 31 04:40:30 crc kubenswrapper[4931]: I0131 04:40:30.095554 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"3166808f-2786-4207-9cb3-f32437499a16","Type":"ContainerStarted","Data":"23d903fc5e161280e875e4f2df79c9cd2d8aef67b45d0c9780266f89f9f3f3e8"} Jan 31 04:40:36 crc kubenswrapper[4931]: I0131 04:40:36.341439 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:36 crc kubenswrapper[4931]: I0131 04:40:36.342677 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:36 crc kubenswrapper[4931]: I0131 04:40:36.384798 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:37 crc kubenswrapper[4931]: I0131 04:40:37.176682 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-cjqh2" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.231580 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2qxtp"] Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.233591 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2qxtp"] Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.233962 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.315881 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gs6\" (UniqueName: \"kubernetes.io/projected/09a78d77-0c67-4687-bf66-8508ae6a5691-kube-api-access-v4gs6\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.316022 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-utilities\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.316068 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-catalog-content\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.417956 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-utilities\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.418034 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-catalog-content\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.418075 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4gs6\" (UniqueName: \"kubernetes.io/projected/09a78d77-0c67-4687-bf66-8508ae6a5691-kube-api-access-v4gs6\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.418863 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-utilities\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.422193 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-catalog-content\") pod \"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.442994 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4gs6\" (UniqueName: \"kubernetes.io/projected/09a78d77-0c67-4687-bf66-8508ae6a5691-kube-api-access-v4gs6\") pod 
\"community-operators-2qxtp\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:41 crc kubenswrapper[4931]: I0131 04:40:41.581032 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:44 crc kubenswrapper[4931]: I0131 04:40:44.490817 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2qxtp"] Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.215751 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxtp" event={"ID":"09a78d77-0c67-4687-bf66-8508ae6a5691","Type":"ContainerStarted","Data":"379c62d529804ea34ca0811a5284a31d8c895c1151f7c9ed80dc33ed74c1ad08"} Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.255563 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr"] Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.257254 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.259071 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rb6qm" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.265808 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr"] Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.273090 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-util\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.273303 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltqqh\" (UniqueName: \"kubernetes.io/projected/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-kube-api-access-ltqqh\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.273450 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-bundle\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.374664 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-bundle\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 
04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.375131 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-util\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.375190 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltqqh\" (UniqueName: \"kubernetes.io/projected/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-kube-api-access-ltqqh\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.375507 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-bundle\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.375606 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-util\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.401649 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltqqh\" (UniqueName: \"kubernetes.io/projected/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-kube-api-access-ltqqh\") pod \"e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:45 crc kubenswrapper[4931]: I0131 04:40:45.575738 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:46 crc kubenswrapper[4931]: I0131 04:40:46.012849 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr"] Jan 31 04:40:46 crc kubenswrapper[4931]: W0131 04:40:46.028586 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7d13ccf_2b66_4e0f_9ccf_706004dbccaa.slice/crio-6f7934813ecbd2da717dc100a1f0eda192ffa9758f886c3ec4cc8deefbae2120 WatchSource:0}: Error finding container 6f7934813ecbd2da717dc100a1f0eda192ffa9758f886c3ec4cc8deefbae2120: Status 404 returned error can't find the container with id 6f7934813ecbd2da717dc100a1f0eda192ffa9758f886c3ec4cc8deefbae2120 Jan 31 04:40:46 crc kubenswrapper[4931]: I0131 04:40:46.236775 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" event={"ID":"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa","Type":"ContainerStarted","Data":"6f7934813ecbd2da717dc100a1f0eda192ffa9758f886c3ec4cc8deefbae2120"} Jan 31 04:40:46 crc kubenswrapper[4931]: I0131 04:40:46.239766 4931 generic.go:334] "Generic (PLEG): container finished" podID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerID="b11855da34a145f2598ce82059f09582ac6b36ad19e00124214f1d7d18ef7e17" exitCode=0 Jan 31 04:40:46 crc kubenswrapper[4931]: I0131 04:40:46.239821 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxtp" event={"ID":"09a78d77-0c67-4687-bf66-8508ae6a5691","Type":"ContainerDied","Data":"b11855da34a145f2598ce82059f09582ac6b36ad19e00124214f1d7d18ef7e17"} Jan 31 04:40:47 crc kubenswrapper[4931]: I0131 04:40:47.248913 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxtp" event={"ID":"09a78d77-0c67-4687-bf66-8508ae6a5691","Type":"ContainerStarted","Data":"263506435a890dc3988b339a5247da71af54d7c4763bc111087506f83c228033"} Jan 31 04:40:47 crc kubenswrapper[4931]: I0131 04:40:47.251648 4931 generic.go:334] "Generic (PLEG): container finished" podID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerID="9439e9b54c355c6836dc44a1aa2de9c8573d9de31e81bb6a011a3e6e04cda37e" exitCode=0 Jan 31 04:40:47 crc kubenswrapper[4931]: I0131 04:40:47.251686 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" event={"ID":"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa","Type":"ContainerDied","Data":"9439e9b54c355c6836dc44a1aa2de9c8573d9de31e81bb6a011a3e6e04cda37e"} Jan 31 04:40:48 crc kubenswrapper[4931]: I0131 04:40:48.266448 4931 generic.go:334] "Generic (PLEG): container finished" podID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerID="263506435a890dc3988b339a5247da71af54d7c4763bc111087506f83c228033" exitCode=0 Jan 31 04:40:48 crc kubenswrapper[4931]: I0131 04:40:48.266800 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxtp" event={"ID":"09a78d77-0c67-4687-bf66-8508ae6a5691","Type":"ContainerDied","Data":"263506435a890dc3988b339a5247da71af54d7c4763bc111087506f83c228033"} Jan 31 04:40:49 crc kubenswrapper[4931]: I0131 04:40:49.272924 4931 generic.go:334] "Generic (PLEG): container finished" podID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" 
containerID="4ff656c0b2f84f2fd2e931d9c10aadf05271a49e8e3f79c841cfcd4a10adad24" exitCode=0 Jan 31 04:40:49 crc kubenswrapper[4931]: I0131 04:40:49.272965 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" event={"ID":"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa","Type":"ContainerDied","Data":"4ff656c0b2f84f2fd2e931d9c10aadf05271a49e8e3f79c841cfcd4a10adad24"} Jan 31 04:40:50 crc kubenswrapper[4931]: I0131 04:40:50.286361 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxtp" event={"ID":"09a78d77-0c67-4687-bf66-8508ae6a5691","Type":"ContainerStarted","Data":"fe022d220f77fe81375ecda9cbe4e10f9ad7cddaef8c32a4e1ba16c5b9c4dadc"} Jan 31 04:40:50 crc kubenswrapper[4931]: I0131 04:40:50.289660 4931 generic.go:334] "Generic (PLEG): container finished" podID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerID="0a3c2f0cd143be777add08e2c51dcffbc5a08852780778825918395b77723b8e" exitCode=0 Jan 31 04:40:50 crc kubenswrapper[4931]: I0131 04:40:50.289734 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" event={"ID":"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa","Type":"ContainerDied","Data":"0a3c2f0cd143be777add08e2c51dcffbc5a08852780778825918395b77723b8e"} Jan 31 04:40:50 crc kubenswrapper[4931]: I0131 04:40:50.313933 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2qxtp" podStartSLOduration=6.202730133 podStartE2EDuration="9.313906213s" podCreationTimestamp="2026-01-31 04:40:41 +0000 UTC" firstStartedPulling="2026-01-31 04:40:46.241838558 +0000 UTC m=+1005.051067472" lastFinishedPulling="2026-01-31 04:40:49.353014678 +0000 UTC m=+1008.162243552" observedRunningTime="2026-01-31 04:40:50.303773903 +0000 UTC m=+1009.113002817" watchObservedRunningTime="2026-01-31 04:40:50.313906213 +0000 UTC m=+1009.123135087" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.529240 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.577928 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-util\") pod \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.578106 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltqqh\" (UniqueName: \"kubernetes.io/projected/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-kube-api-access-ltqqh\") pod \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.578137 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-bundle\") pod \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\" (UID: \"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa\") " Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.579436 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-bundle" (OuterVolumeSpecName: "bundle") pod "d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" (UID: "d7d13ccf-2b66-4e0f-9ccf-706004dbccaa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.581875 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.581938 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.593941 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-kube-api-access-ltqqh" (OuterVolumeSpecName: "kube-api-access-ltqqh") pod "d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" (UID: "d7d13ccf-2b66-4e0f-9ccf-706004dbccaa"). InnerVolumeSpecName "kube-api-access-ltqqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.626502 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.680031 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltqqh\" (UniqueName: \"kubernetes.io/projected/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-kube-api-access-ltqqh\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:51 crc kubenswrapper[4931]: I0131 04:40:51.680076 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:52 crc kubenswrapper[4931]: I0131 04:40:52.078289 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-util" (OuterVolumeSpecName: "util") pod "d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" (UID: "d7d13ccf-2b66-4e0f-9ccf-706004dbccaa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:40:52 crc kubenswrapper[4931]: I0131 04:40:52.087039 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7d13ccf-2b66-4e0f-9ccf-706004dbccaa-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:52 crc kubenswrapper[4931]: I0131 04:40:52.307009 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" Jan 31 04:40:52 crc kubenswrapper[4931]: I0131 04:40:52.307004 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr" event={"ID":"d7d13ccf-2b66-4e0f-9ccf-706004dbccaa","Type":"ContainerDied","Data":"6f7934813ecbd2da717dc100a1f0eda192ffa9758f886c3ec4cc8deefbae2120"} Jan 31 04:40:52 crc kubenswrapper[4931]: I0131 04:40:52.307107 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7934813ecbd2da717dc100a1f0eda192ffa9758f886c3ec4cc8deefbae2120" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.362463 4931 generic.go:334] "Generic (PLEG): container finished" podID="3166808f-2786-4207-9cb3-f32437499a16" containerID="23d903fc5e161280e875e4f2df79c9cd2d8aef67b45d0c9780266f89f9f3f3e8" exitCode=0 Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.362543 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"3166808f-2786-4207-9cb3-f32437499a16","Type":"ContainerDied","Data":"23d903fc5e161280e875e4f2df79c9cd2d8aef67b45d0c9780266f89f9f3f3e8"} Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.586244 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r"] Jan 31 04:41:01 crc kubenswrapper[4931]: E0131 04:41:01.586767 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerName="util" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.586780 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerName="util" Jan 31 04:41:01 crc kubenswrapper[4931]: E0131 04:41:01.586808 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerName="pull" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.586816 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerName="pull" Jan 31 04:41:01 crc kubenswrapper[4931]: E0131 04:41:01.586826 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerName="extract" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.586834 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerName="extract" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.586968 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d13ccf-2b66-4e0f-9ccf-706004dbccaa" containerName="extract" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.587569 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.590077 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ctrnd" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.590254 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.632180 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r"] Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.635518 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.639410 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2s5z\" (UniqueName: \"kubernetes.io/projected/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-kube-api-access-p2s5z\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.639457 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-apiservice-cert\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.639485 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-webhook-cert\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.742003 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-webhook-cert\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.742133 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2s5z\" (UniqueName: \"kubernetes.io/projected/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-kube-api-access-p2s5z\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.742156 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-apiservice-cert\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: 
\"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.746591 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-webhook-cert\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.748338 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-apiservice-cert\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.759918 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2s5z\" (UniqueName: \"kubernetes.io/projected/9de4c2dc-4248-48ff-9eba-77bb5f41af6e-kube-api-access-p2s5z\") pod \"keystone-operator-controller-manager-78d69b64d-9c79r\" (UID: \"9de4c2dc-4248-48ff-9eba-77bb5f41af6e\") " pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.917069 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ctrnd" Jan 31 04:41:01 crc kubenswrapper[4931]: I0131 04:41:01.925938 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:02 crc kubenswrapper[4931]: I0131 04:41:02.371242 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"3166808f-2786-4207-9cb3-f32437499a16","Type":"ContainerStarted","Data":"bea07f4e1cc08a9c33de714f15dfb6af755b1512d502eb42ce325b634e97e7fe"} Jan 31 04:41:02 crc kubenswrapper[4931]: I0131 04:41:02.372999 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:41:02 crc kubenswrapper[4931]: I0131 04:41:02.395638 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r"] Jan 31 04:41:02 crc kubenswrapper[4931]: I0131 04:41:02.415373 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.789905674 podStartE2EDuration="49.41535846s" podCreationTimestamp="2026-01-31 04:40:13 +0000 UTC" firstStartedPulling="2026-01-31 04:40:15.249186026 +0000 UTC m=+974.058414920" lastFinishedPulling="2026-01-31 04:40:27.874638822 +0000 UTC m=+986.683867706" observedRunningTime="2026-01-31 04:41:02.411833573 +0000 UTC m=+1021.221062467" watchObservedRunningTime="2026-01-31 04:41:02.41535846 +0000 UTC m=+1021.224587334" Jan 31 04:41:03 crc kubenswrapper[4931]: I0131 04:41:03.378896 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" event={"ID":"9de4c2dc-4248-48ff-9eba-77bb5f41af6e","Type":"ContainerStarted","Data":"8b6700da49fa720200193a4039c64df0db3e714a7d90d3b0e402d52bc22e2148"} Jan 31 04:41:05 crc 
kubenswrapper[4931]: I0131 04:41:05.205057 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2qxtp"] Jan 31 04:41:05 crc kubenswrapper[4931]: I0131 04:41:05.206924 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2qxtp" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="registry-server" containerID="cri-o://fe022d220f77fe81375ecda9cbe4e10f9ad7cddaef8c32a4e1ba16c5b9c4dadc" gracePeriod=2 Jan 31 04:41:05 crc kubenswrapper[4931]: I0131 04:41:05.397794 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" event={"ID":"9de4c2dc-4248-48ff-9eba-77bb5f41af6e","Type":"ContainerStarted","Data":"99da0aacfa0ea477f2018a5dab32cc9959880d1378ab4500591d196429d2ede4"} Jan 31 04:41:05 crc kubenswrapper[4931]: I0131 04:41:05.397840 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" event={"ID":"9de4c2dc-4248-48ff-9eba-77bb5f41af6e","Type":"ContainerStarted","Data":"87a2336052e19cc54a46372fa8e7576b9c766f1a523cbba194656768e67824a9"} Jan 31 04:41:05 crc kubenswrapper[4931]: I0131 04:41:05.397889 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:05 crc kubenswrapper[4931]: I0131 04:41:05.400511 4931 generic.go:334] "Generic (PLEG): container finished" podID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerID="fe022d220f77fe81375ecda9cbe4e10f9ad7cddaef8c32a4e1ba16c5b9c4dadc" exitCode=0 Jan 31 04:41:05 crc kubenswrapper[4931]: I0131 04:41:05.400554 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxtp" event={"ID":"09a78d77-0c67-4687-bf66-8508ae6a5691","Type":"ContainerDied","Data":"fe022d220f77fe81375ecda9cbe4e10f9ad7cddaef8c32a4e1ba16c5b9c4dadc"} Jan 31 04:41:05 crc kubenswrapper[4931]: I0131 04:41:05.437248 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" podStartSLOduration=2.239043459 podStartE2EDuration="4.437191515s" podCreationTimestamp="2026-01-31 04:41:01 +0000 UTC" firstStartedPulling="2026-01-31 04:41:02.415779502 +0000 UTC m=+1021.225008376" lastFinishedPulling="2026-01-31 04:41:04.613927558 +0000 UTC m=+1023.423156432" observedRunningTime="2026-01-31 04:41:05.431034016 +0000 UTC m=+1024.240262900" watchObservedRunningTime="2026-01-31 04:41:05.437191515 +0000 UTC m=+1024.246420389" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.399308 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.408398 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxtp" event={"ID":"09a78d77-0c67-4687-bf66-8508ae6a5691","Type":"ContainerDied","Data":"379c62d529804ea34ca0811a5284a31d8c895c1151f7c9ed80dc33ed74c1ad08"} Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.408466 4931 scope.go:117] "RemoveContainer" containerID="fe022d220f77fe81375ecda9cbe4e10f9ad7cddaef8c32a4e1ba16c5b9c4dadc" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.408465 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2qxtp" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.425102 4931 scope.go:117] "RemoveContainer" containerID="263506435a890dc3988b339a5247da71af54d7c4763bc111087506f83c228033" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.453537 4931 scope.go:117] "RemoveContainer" containerID="b11855da34a145f2598ce82059f09582ac6b36ad19e00124214f1d7d18ef7e17" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.525257 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4gs6\" (UniqueName: \"kubernetes.io/projected/09a78d77-0c67-4687-bf66-8508ae6a5691-kube-api-access-v4gs6\") pod \"09a78d77-0c67-4687-bf66-8508ae6a5691\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.525346 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-utilities\") pod \"09a78d77-0c67-4687-bf66-8508ae6a5691\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.525384 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-catalog-content\") pod \"09a78d77-0c67-4687-bf66-8508ae6a5691\" (UID: \"09a78d77-0c67-4687-bf66-8508ae6a5691\") " Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.526321 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-utilities" (OuterVolumeSpecName: "utilities") pod "09a78d77-0c67-4687-bf66-8508ae6a5691" (UID: "09a78d77-0c67-4687-bf66-8508ae6a5691"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.530879 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a78d77-0c67-4687-bf66-8508ae6a5691-kube-api-access-v4gs6" (OuterVolumeSpecName: "kube-api-access-v4gs6") pod "09a78d77-0c67-4687-bf66-8508ae6a5691" (UID: "09a78d77-0c67-4687-bf66-8508ae6a5691"). InnerVolumeSpecName "kube-api-access-v4gs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.571300 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09a78d77-0c67-4687-bf66-8508ae6a5691" (UID: "09a78d77-0c67-4687-bf66-8508ae6a5691"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.627879 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4gs6\" (UniqueName: \"kubernetes.io/projected/09a78d77-0c67-4687-bf66-8508ae6a5691-kube-api-access-v4gs6\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.627923 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.627934 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a78d77-0c67-4687-bf66-8508ae6a5691-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.748290 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2qxtp"] Jan 31 04:41:06 crc kubenswrapper[4931]: I0131 04:41:06.754799 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2qxtp"] Jan 31 04:41:07 crc kubenswrapper[4931]: I0131 04:41:07.904953 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" path="/var/lib/kubelet/pods/09a78d77-0c67-4687-bf66-8508ae6a5691/volumes" Jan 31 04:41:11 crc kubenswrapper[4931]: I0131 04:41:11.932695 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-78d69b64d-9c79r" Jan 31 04:41:14 crc kubenswrapper[4931]: I0131 04:41:14.759947 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.230858 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-create-kh978"] Jan 31 04:41:16 crc kubenswrapper[4931]: E0131 04:41:16.231400 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="registry-server" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.231413 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="registry-server" Jan 31 04:41:16 crc kubenswrapper[4931]: E0131 04:41:16.231441 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="extract-content" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.231449 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="extract-content" Jan 31 04:41:16 crc kubenswrapper[4931]: E0131 04:41:16.231466 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="extract-utilities" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.231475 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="extract-utilities" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.231590 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a78d77-0c67-4687-bf66-8508ae6a5691" containerName="registry-server" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.232271 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-kh978" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.239227 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-kh978"] Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.271869 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6xw\" (UniqueName: \"kubernetes.io/projected/1b088d08-99db-4f24-af21-ac85849692c5-kube-api-access-fn6xw\") pod \"keystone-db-create-kh978\" (UID: \"1b088d08-99db-4f24-af21-ac85849692c5\") " pod="glance-kuttl-tests/keystone-db-create-kh978" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.373626 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn6xw\" (UniqueName: \"kubernetes.io/projected/1b088d08-99db-4f24-af21-ac85849692c5-kube-api-access-fn6xw\") pod \"keystone-db-create-kh978\" (UID: \"1b088d08-99db-4f24-af21-ac85849692c5\") " pod="glance-kuttl-tests/keystone-db-create-kh978" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.397300 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn6xw\" (UniqueName: \"kubernetes.io/projected/1b088d08-99db-4f24-af21-ac85849692c5-kube-api-access-fn6xw\") pod \"keystone-db-create-kh978\" (UID: \"1b088d08-99db-4f24-af21-ac85849692c5\") " pod="glance-kuttl-tests/keystone-db-create-kh978" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.412890 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-kxpz6"] Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.413697 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.416544 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-nxlgr" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.427045 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-kxpz6"] Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.475446 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcpm\" (UniqueName: \"kubernetes.io/projected/6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3-kube-api-access-vzcpm\") pod \"horizon-operator-index-kxpz6\" (UID: \"6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3\") " pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.550834 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-kh978" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.576467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcpm\" (UniqueName: \"kubernetes.io/projected/6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3-kube-api-access-vzcpm\") pod \"horizon-operator-index-kxpz6\" (UID: \"6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3\") " pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.595506 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcpm\" (UniqueName: \"kubernetes.io/projected/6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3-kube-api-access-vzcpm\") pod \"horizon-operator-index-kxpz6\" (UID: \"6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3\") " pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.740138 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.917477 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-kxpz6"] Jan 31 04:41:16 crc kubenswrapper[4931]: I0131 04:41:16.959858 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-kh978"] Jan 31 04:41:16 crc kubenswrapper[4931]: W0131 04:41:16.963898 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b088d08_99db_4f24_af21_ac85849692c5.slice/crio-1b9de15470e45b44a6bb87813ae8dc42b9a93f19729f1ce803d8e7b13f4396ef WatchSource:0}: Error finding container 1b9de15470e45b44a6bb87813ae8dc42b9a93f19729f1ce803d8e7b13f4396ef: Status 404 returned error can't find the container with id 1b9de15470e45b44a6bb87813ae8dc42b9a93f19729f1ce803d8e7b13f4396ef Jan 31 04:41:17 crc kubenswrapper[4931]: I0131 04:41:17.479463 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-kxpz6" event={"ID":"6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3","Type":"ContainerStarted","Data":"1950e4dca876df4ed3cee68b79c064a50dfbf69d702139fd14d93c7fa497257f"} Jan 31 04:41:17 crc kubenswrapper[4931]: I0131 04:41:17.480905 4931 generic.go:334] "Generic (PLEG): container finished" podID="1b088d08-99db-4f24-af21-ac85849692c5" containerID="7ff587e7d4b5b26acacc69b2e25286a7f18280a51aab704de20d260734969ee4" exitCode=0 Jan 31 04:41:17 crc kubenswrapper[4931]: I0131 04:41:17.480950 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-kh978" event={"ID":"1b088d08-99db-4f24-af21-ac85849692c5","Type":"ContainerDied","Data":"7ff587e7d4b5b26acacc69b2e25286a7f18280a51aab704de20d260734969ee4"} Jan 31 04:41:17 crc kubenswrapper[4931]: I0131 04:41:17.480977 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-kh978" event={"ID":"1b088d08-99db-4f24-af21-ac85849692c5","Type":"ContainerStarted","Data":"1b9de15470e45b44a6bb87813ae8dc42b9a93f19729f1ce803d8e7b13f4396ef"} Jan 31 04:41:18 crc kubenswrapper[4931]: I0131 04:41:18.491077 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-kxpz6" event={"ID":"6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3","Type":"ContainerStarted","Data":"94d10928eb479b2ce5d1d965070b6ef5393c25e4a6c76fef46f0513e0bd9ebfc"} Jan 31 04:41:18 crc kubenswrapper[4931]: 
I0131 04:41:18.513608 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-kxpz6" podStartSLOduration=1.7930340230000001 podStartE2EDuration="2.513594257s" podCreationTimestamp="2026-01-31 04:41:16 +0000 UTC" firstStartedPulling="2026-01-31 04:41:16.922132001 +0000 UTC m=+1035.731360875" lastFinishedPulling="2026-01-31 04:41:17.642692215 +0000 UTC m=+1036.451921109" observedRunningTime="2026-01-31 04:41:18.51006979 +0000 UTC m=+1037.319298674" watchObservedRunningTime="2026-01-31 04:41:18.513594257 +0000 UTC m=+1037.322823131" Jan 31 04:41:18 crc kubenswrapper[4931]: I0131 04:41:18.778650 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-kh978" Jan 31 04:41:18 crc kubenswrapper[4931]: I0131 04:41:18.816265 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn6xw\" (UniqueName: \"kubernetes.io/projected/1b088d08-99db-4f24-af21-ac85849692c5-kube-api-access-fn6xw\") pod \"1b088d08-99db-4f24-af21-ac85849692c5\" (UID: \"1b088d08-99db-4f24-af21-ac85849692c5\") " Jan 31 04:41:18 crc kubenswrapper[4931]: I0131 04:41:18.824112 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b088d08-99db-4f24-af21-ac85849692c5-kube-api-access-fn6xw" (OuterVolumeSpecName: "kube-api-access-fn6xw") pod "1b088d08-99db-4f24-af21-ac85849692c5" (UID: "1b088d08-99db-4f24-af21-ac85849692c5"). InnerVolumeSpecName "kube-api-access-fn6xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:41:18 crc kubenswrapper[4931]: I0131 04:41:18.917987 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn6xw\" (UniqueName: \"kubernetes.io/projected/1b088d08-99db-4f24-af21-ac85849692c5-kube-api-access-fn6xw\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.210615 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-75cgq"] Jan 31 04:41:19 crc kubenswrapper[4931]: E0131 04:41:19.210940 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b088d08-99db-4f24-af21-ac85849692c5" containerName="mariadb-database-create" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.210961 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b088d08-99db-4f24-af21-ac85849692c5" containerName="mariadb-database-create" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.211108 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b088d08-99db-4f24-af21-ac85849692c5" containerName="mariadb-database-create" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.211548 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.214061 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-rskpw" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.221137 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-75cgq"] Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.221585 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28hm8\" (UniqueName: \"kubernetes.io/projected/c3b14f91-1228-4221-9b36-288f45301065-kube-api-access-28hm8\") pod \"swift-operator-index-75cgq\" (UID: \"c3b14f91-1228-4221-9b36-288f45301065\") " pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.322194 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28hm8\" (UniqueName: \"kubernetes.io/projected/c3b14f91-1228-4221-9b36-288f45301065-kube-api-access-28hm8\") pod \"swift-operator-index-75cgq\" (UID: \"c3b14f91-1228-4221-9b36-288f45301065\") " pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.340837 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28hm8\" (UniqueName: \"kubernetes.io/projected/c3b14f91-1228-4221-9b36-288f45301065-kube-api-access-28hm8\") pod \"swift-operator-index-75cgq\" (UID: \"c3b14f91-1228-4221-9b36-288f45301065\") " pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.500182 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-kh978" event={"ID":"1b088d08-99db-4f24-af21-ac85849692c5","Type":"ContainerDied","Data":"1b9de15470e45b44a6bb87813ae8dc42b9a93f19729f1ce803d8e7b13f4396ef"} Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.500235 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b9de15470e45b44a6bb87813ae8dc42b9a93f19729f1ce803d8e7b13f4396ef" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.500196 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-kh978" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.526586 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:19 crc kubenswrapper[4931]: I0131 04:41:19.958437 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-75cgq"] Jan 31 04:41:19 crc kubenswrapper[4931]: W0131 04:41:19.961946 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b14f91_1228_4221_9b36_288f45301065.slice/crio-180d031b16ac1bea48990b5ab723a3d11bc3f6f9a649d4762b76907819eb564d WatchSource:0}: Error finding container 180d031b16ac1bea48990b5ab723a3d11bc3f6f9a649d4762b76907819eb564d: Status 404 returned error can't find the container with id 180d031b16ac1bea48990b5ab723a3d11bc3f6f9a649d4762b76907819eb564d Jan 31 04:41:20 crc kubenswrapper[4931]: I0131 04:41:20.552865 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-75cgq" event={"ID":"c3b14f91-1228-4221-9b36-288f45301065","Type":"ContainerStarted","Data":"180d031b16ac1bea48990b5ab723a3d11bc3f6f9a649d4762b76907819eb564d"} Jan 31 04:41:21 crc kubenswrapper[4931]: I0131 04:41:21.573872 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-75cgq" event={"ID":"c3b14f91-1228-4221-9b36-288f45301065","Type":"ContainerStarted","Data":"06f5b3ae1cb11891ad0a1dccbeea2600b01690bdfff99a59b5db87a1cace6318"} Jan 31 04:41:21 crc kubenswrapper[4931]: I0131 04:41:21.597256 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-75cgq" podStartSLOduration=1.718455558 podStartE2EDuration="2.597237646s" podCreationTimestamp="2026-01-31 04:41:19 +0000 UTC" firstStartedPulling="2026-01-31 04:41:19.964142844 +0000 UTC m=+1038.773371718" lastFinishedPulling="2026-01-31 04:41:20.842924932 +0000 UTC m=+1039.652153806" observedRunningTime="2026-01-31 04:41:21.593685948 +0000 UTC m=+1040.402914852" watchObservedRunningTime="2026-01-31 04:41:21.597237646 +0000 UTC m=+1040.406466530" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.140490 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-0bae-account-create-xmhsn"] Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.142089 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.145136 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-db-secret" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.151848 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-0bae-account-create-xmhsn"] Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.329105 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pg8h\" (UniqueName: \"kubernetes.io/projected/bcc26d7e-6d49-45a0-bad1-540d998798fb-kube-api-access-5pg8h\") pod \"keystone-0bae-account-create-xmhsn\" (UID: \"bcc26d7e-6d49-45a0-bad1-540d998798fb\") " pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.431857 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pg8h\" (UniqueName: \"kubernetes.io/projected/bcc26d7e-6d49-45a0-bad1-540d998798fb-kube-api-access-5pg8h\") pod \"keystone-0bae-account-create-xmhsn\" (UID: \"bcc26d7e-6d49-45a0-bad1-540d998798fb\") " pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.453543 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pg8h\" (UniqueName: \"kubernetes.io/projected/bcc26d7e-6d49-45a0-bad1-540d998798fb-kube-api-access-5pg8h\") pod \"keystone-0bae-account-create-xmhsn\" (UID: \"bcc26d7e-6d49-45a0-bad1-540d998798fb\") " pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.469165 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.740688 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.741068 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.813599 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:26 crc kubenswrapper[4931]: I0131 04:41:26.896847 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-0bae-account-create-xmhsn"] Jan 31 04:41:27 crc kubenswrapper[4931]: I0131 04:41:27.638049 4931 generic.go:334] "Generic (PLEG): container finished" podID="bcc26d7e-6d49-45a0-bad1-540d998798fb" containerID="6da3826094312cc2aa4f8e736c661bbf33071c21dd327e35bb962273bca0ea44" exitCode=0 Jan 31 04:41:27 crc kubenswrapper[4931]: I0131 04:41:27.638135 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" event={"ID":"bcc26d7e-6d49-45a0-bad1-540d998798fb","Type":"ContainerDied","Data":"6da3826094312cc2aa4f8e736c661bbf33071c21dd327e35bb962273bca0ea44"} Jan 31 04:41:27 crc kubenswrapper[4931]: I0131 04:41:27.638367 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" event={"ID":"bcc26d7e-6d49-45a0-bad1-540d998798fb","Type":"ContainerStarted","Data":"138bd3506a65b158efcac8db8b4a4e156e35275dda70086f4aeebca6d5e4b57b"} Jan 31 04:41:27 crc kubenswrapper[4931]: I0131 04:41:27.667399 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-kxpz6" Jan 31 04:41:28 crc kubenswrapper[4931]: I0131 04:41:28.929493 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.069653 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pg8h\" (UniqueName: \"kubernetes.io/projected/bcc26d7e-6d49-45a0-bad1-540d998798fb-kube-api-access-5pg8h\") pod \"bcc26d7e-6d49-45a0-bad1-540d998798fb\" (UID: \"bcc26d7e-6d49-45a0-bad1-540d998798fb\") " Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.074964 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc26d7e-6d49-45a0-bad1-540d998798fb-kube-api-access-5pg8h" (OuterVolumeSpecName: "kube-api-access-5pg8h") pod "bcc26d7e-6d49-45a0-bad1-540d998798fb" (UID: "bcc26d7e-6d49-45a0-bad1-540d998798fb"). InnerVolumeSpecName "kube-api-access-5pg8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.171587 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pg8h\" (UniqueName: \"kubernetes.io/projected/bcc26d7e-6d49-45a0-bad1-540d998798fb-kube-api-access-5pg8h\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.527541 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.527942 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.554438 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.652787 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" event={"ID":"bcc26d7e-6d49-45a0-bad1-540d998798fb","Type":"ContainerDied","Data":"138bd3506a65b158efcac8db8b4a4e156e35275dda70086f4aeebca6d5e4b57b"} Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.652821 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-0bae-account-create-xmhsn" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.652838 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138bd3506a65b158efcac8db8b4a4e156e35275dda70086f4aeebca6d5e4b57b" Jan 31 04:41:29 crc kubenswrapper[4931]: I0131 04:41:29.692023 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-75cgq" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.690536 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-sync-hlp86"] Jan 31 04:41:31 crc kubenswrapper[4931]: E0131 04:41:31.690883 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc26d7e-6d49-45a0-bad1-540d998798fb" containerName="mariadb-account-create" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.690898 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc26d7e-6d49-45a0-bad1-540d998798fb" containerName="mariadb-account-create" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.691057 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc26d7e-6d49-45a0-bad1-540d998798fb" containerName="mariadb-account-create" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.691681 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.698812 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.698870 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.698824 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-p4wm7" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.699105 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.699208 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-hlp86"] Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.731683 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ntl\" (UniqueName: \"kubernetes.io/projected/5122d57d-31ab-4312-a0d7-32c11806847f-kube-api-access-d9ntl\") pod \"keystone-db-sync-hlp86\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.732048 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5122d57d-31ab-4312-a0d7-32c11806847f-config-data\") pod \"keystone-db-sync-hlp86\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.833094 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9ntl\" (UniqueName: \"kubernetes.io/projected/5122d57d-31ab-4312-a0d7-32c11806847f-kube-api-access-d9ntl\") pod \"keystone-db-sync-hlp86\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.833181 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5122d57d-31ab-4312-a0d7-32c11806847f-config-data\") pod \"keystone-db-sync-hlp86\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.842646 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5122d57d-31ab-4312-a0d7-32c11806847f-config-data\") pod \"keystone-db-sync-hlp86\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:31 crc kubenswrapper[4931]: I0131 04:41:31.854408 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9ntl\" (UniqueName: \"kubernetes.io/projected/5122d57d-31ab-4312-a0d7-32c11806847f-kube-api-access-d9ntl\") pod \"keystone-db-sync-hlp86\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:32 crc kubenswrapper[4931]: I0131 04:41:32.016324 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:32 crc kubenswrapper[4931]: I0131 04:41:32.423701 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-hlp86"] Jan 31 04:41:32 crc kubenswrapper[4931]: I0131 04:41:32.441362 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:41:32 crc kubenswrapper[4931]: I0131 04:41:32.673169 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-hlp86" event={"ID":"5122d57d-31ab-4312-a0d7-32c11806847f","Type":"ContainerStarted","Data":"afe1c91d5a6656a00238af97e96540ca337967bbd5103dc0a777a830a2ff2e98"} Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.050012 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m"] Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.051745 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.053290 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rb6qm" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.060593 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m"] Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.097128 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-util\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.097221 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7nn\" (UniqueName: \"kubernetes.io/projected/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-kube-api-access-nz7nn\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.097294 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-bundle\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.198426 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7nn\" (UniqueName: \"kubernetes.io/projected/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-kube-api-access-nz7nn\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.198582 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-bundle\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.198649 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-util\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.199152 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-util\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.200402 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-bundle\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.232074 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7nn\" (UniqueName: \"kubernetes.io/projected/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-kube-api-access-nz7nn\") pod \"08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:36 crc kubenswrapper[4931]: I0131 04:41:36.380921 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.060329 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq"] Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.062655 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.062673 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq"] Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.115325 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-util\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.115756 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-bundle\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.115814 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jvcd\" (UniqueName: \"kubernetes.io/projected/eda2f66e-e58a-4521-870a-b05a8cfef2ab-kube-api-access-7jvcd\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.217693 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-util\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.217816 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-bundle\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.217881 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jvcd\" (UniqueName: \"kubernetes.io/projected/eda2f66e-e58a-4521-870a-b05a8cfef2ab-kube-api-access-7jvcd\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.219137 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-util\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " 
pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.219516 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-bundle\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.236196 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jvcd\" (UniqueName: \"kubernetes.io/projected/eda2f66e-e58a-4521-870a-b05a8cfef2ab-kube-api-access-7jvcd\") pod \"f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:37 crc kubenswrapper[4931]: I0131 04:41:37.400349 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:39 crc kubenswrapper[4931]: I0131 04:41:39.728086 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-hlp86" event={"ID":"5122d57d-31ab-4312-a0d7-32c11806847f","Type":"ContainerStarted","Data":"b88df7809d6800761b70ff0f89f02ab9a612490737b24f719f851567b819c225"} Jan 31 04:41:39 crc kubenswrapper[4931]: I0131 04:41:39.749219 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-sync-hlp86" podStartSLOduration=1.6710932889999999 podStartE2EDuration="8.749200972s" podCreationTimestamp="2026-01-31 04:41:31 +0000 UTC" firstStartedPulling="2026-01-31 04:41:32.441057961 +0000 UTC m=+1051.250286835" lastFinishedPulling="2026-01-31 04:41:39.519165644 +0000 UTC m=+1058.328394518" observedRunningTime="2026-01-31 04:41:39.744297067 +0000 UTC m=+1058.553525951" watchObservedRunningTime="2026-01-31 04:41:39.749200972 +0000 UTC m=+1058.558429846" Jan 31 04:41:39 crc kubenswrapper[4931]: I0131 04:41:39.905404 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m"] Jan 31 04:41:39 crc kubenswrapper[4931]: W0131 04:41:39.906026 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e9000bc_caf4_4e29_9b8f_8d59434c0e3b.slice/crio-1c9b198a37f34bd58e3be2e03762bd2ec4ce6a75780f2d2f22c30aa9abd5683c WatchSource:0}: Error finding container 1c9b198a37f34bd58e3be2e03762bd2ec4ce6a75780f2d2f22c30aa9abd5683c: Status 404 returned error can't find the container with id 1c9b198a37f34bd58e3be2e03762bd2ec4ce6a75780f2d2f22c30aa9abd5683c Jan 31 04:41:39 crc kubenswrapper[4931]: I0131 04:41:39.918699 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq"] Jan 31 04:41:39 crc kubenswrapper[4931]: W0131 04:41:39.928796 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeda2f66e_e58a_4521_870a_b05a8cfef2ab.slice/crio-bd7f54c18e99495c8bdafb0bb89363c55cd44a25aa39fb36e450305e05954877 WatchSource:0}: Error finding container 
bd7f54c18e99495c8bdafb0bb89363c55cd44a25aa39fb36e450305e05954877: Status 404 returned error can't find the container with id bd7f54c18e99495c8bdafb0bb89363c55cd44a25aa39fb36e450305e05954877 Jan 31 04:41:40 crc kubenswrapper[4931]: I0131 04:41:40.736471 4931 generic.go:334] "Generic (PLEG): container finished" podID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerID="c0660173c1a3b60692356936b8d085c37230f25b66042911662841fe4d3a8668" exitCode=0 Jan 31 04:41:40 crc kubenswrapper[4931]: I0131 04:41:40.736555 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" event={"ID":"eda2f66e-e58a-4521-870a-b05a8cfef2ab","Type":"ContainerDied","Data":"c0660173c1a3b60692356936b8d085c37230f25b66042911662841fe4d3a8668"} Jan 31 04:41:40 crc kubenswrapper[4931]: I0131 04:41:40.736585 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" event={"ID":"eda2f66e-e58a-4521-870a-b05a8cfef2ab","Type":"ContainerStarted","Data":"bd7f54c18e99495c8bdafb0bb89363c55cd44a25aa39fb36e450305e05954877"} Jan 31 04:41:40 crc kubenswrapper[4931]: I0131 04:41:40.738164 4931 generic.go:334] "Generic (PLEG): container finished" podID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerID="80f6b26d35cd392190070aa24445ae4bebdbdc2ba9b5e1137278f88a1ec97581" exitCode=0 Jan 31 04:41:40 crc kubenswrapper[4931]: I0131 04:41:40.738240 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" event={"ID":"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b","Type":"ContainerDied","Data":"80f6b26d35cd392190070aa24445ae4bebdbdc2ba9b5e1137278f88a1ec97581"} Jan 31 04:41:40 crc kubenswrapper[4931]: I0131 04:41:40.738282 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" event={"ID":"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b","Type":"ContainerStarted","Data":"1c9b198a37f34bd58e3be2e03762bd2ec4ce6a75780f2d2f22c30aa9abd5683c"} Jan 31 04:41:41 crc kubenswrapper[4931]: I0131 04:41:41.746231 4931 generic.go:334] "Generic (PLEG): container finished" podID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerID="ec47f02fcf9b28fbea8c3fd0289e69de4c9eb51a1c37e08d6e2977b4de5f5418" exitCode=0 Jan 31 04:41:41 crc kubenswrapper[4931]: I0131 04:41:41.746304 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" event={"ID":"eda2f66e-e58a-4521-870a-b05a8cfef2ab","Type":"ContainerDied","Data":"ec47f02fcf9b28fbea8c3fd0289e69de4c9eb51a1c37e08d6e2977b4de5f5418"} Jan 31 04:41:42 crc kubenswrapper[4931]: I0131 04:41:42.757309 4931 generic.go:334] "Generic (PLEG): container finished" podID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerID="7853877f657bc9b657ce145d4ad7f2ba94f13786b4cd2b74dc2bb26425b42a91" exitCode=0 Jan 31 04:41:42 crc kubenswrapper[4931]: I0131 04:41:42.757389 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" event={"ID":"eda2f66e-e58a-4521-870a-b05a8cfef2ab","Type":"ContainerDied","Data":"7853877f657bc9b657ce145d4ad7f2ba94f13786b4cd2b74dc2bb26425b42a91"} Jan 31 04:41:42 crc kubenswrapper[4931]: I0131 04:41:42.760514 4931 generic.go:334] "Generic (PLEG): container finished" podID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" 
containerID="009fb1ce1cd8093d1d4189058de9334ec97959d7c7328e02316362377acf8487" exitCode=0 Jan 31 04:41:42 crc kubenswrapper[4931]: I0131 04:41:42.760559 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" event={"ID":"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b","Type":"ContainerDied","Data":"009fb1ce1cd8093d1d4189058de9334ec97959d7c7328e02316362377acf8487"} Jan 31 04:41:43 crc kubenswrapper[4931]: I0131 04:41:43.769585 4931 generic.go:334] "Generic (PLEG): container finished" podID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerID="ed2d2bebb80d41a7499e3d450f1c17147facfbbaa2af2ff60de52bab7c732d00" exitCode=0 Jan 31 04:41:43 crc kubenswrapper[4931]: I0131 04:41:43.769667 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" event={"ID":"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b","Type":"ContainerDied","Data":"ed2d2bebb80d41a7499e3d450f1c17147facfbbaa2af2ff60de52bab7c732d00"} Jan 31 04:41:43 crc kubenswrapper[4931]: I0131 04:41:43.774830 4931 generic.go:334] "Generic (PLEG): container finished" podID="5122d57d-31ab-4312-a0d7-32c11806847f" containerID="b88df7809d6800761b70ff0f89f02ab9a612490737b24f719f851567b819c225" exitCode=0 Jan 31 04:41:43 crc kubenswrapper[4931]: I0131 04:41:43.774859 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-hlp86" event={"ID":"5122d57d-31ab-4312-a0d7-32c11806847f","Type":"ContainerDied","Data":"b88df7809d6800761b70ff0f89f02ab9a612490737b24f719f851567b819c225"} Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.094020 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.213480 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-bundle\") pod \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.213545 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-util\") pod \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.213593 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jvcd\" (UniqueName: \"kubernetes.io/projected/eda2f66e-e58a-4521-870a-b05a8cfef2ab-kube-api-access-7jvcd\") pod \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\" (UID: \"eda2f66e-e58a-4521-870a-b05a8cfef2ab\") " Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.214602 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-bundle" (OuterVolumeSpecName: "bundle") pod "eda2f66e-e58a-4521-870a-b05a8cfef2ab" (UID: "eda2f66e-e58a-4521-870a-b05a8cfef2ab"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.232004 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda2f66e-e58a-4521-870a-b05a8cfef2ab-kube-api-access-7jvcd" (OuterVolumeSpecName: "kube-api-access-7jvcd") pod "eda2f66e-e58a-4521-870a-b05a8cfef2ab" (UID: "eda2f66e-e58a-4521-870a-b05a8cfef2ab"). InnerVolumeSpecName "kube-api-access-7jvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.247206 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-util" (OuterVolumeSpecName: "util") pod "eda2f66e-e58a-4521-870a-b05a8cfef2ab" (UID: "eda2f66e-e58a-4521-870a-b05a8cfef2ab"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.316501 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jvcd\" (UniqueName: \"kubernetes.io/projected/eda2f66e-e58a-4521-870a-b05a8cfef2ab-kube-api-access-7jvcd\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.316562 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.316576 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eda2f66e-e58a-4521-870a-b05a8cfef2ab-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.783597 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" event={"ID":"eda2f66e-e58a-4521-870a-b05a8cfef2ab","Type":"ContainerDied","Data":"bd7f54c18e99495c8bdafb0bb89363c55cd44a25aa39fb36e450305e05954877"} Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.783649 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd7f54c18e99495c8bdafb0bb89363c55cd44a25aa39fb36e450305e05954877" Jan 31 04:41:44 crc kubenswrapper[4931]: I0131 04:41:44.783768 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.150349 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.158167 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.334344 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5122d57d-31ab-4312-a0d7-32c11806847f-config-data\") pod \"5122d57d-31ab-4312-a0d7-32c11806847f\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.334442 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz7nn\" (UniqueName: \"kubernetes.io/projected/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-kube-api-access-nz7nn\") pod \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.334508 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9ntl\" (UniqueName: \"kubernetes.io/projected/5122d57d-31ab-4312-a0d7-32c11806847f-kube-api-access-d9ntl\") pod \"5122d57d-31ab-4312-a0d7-32c11806847f\" (UID: \"5122d57d-31ab-4312-a0d7-32c11806847f\") " Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.334543 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-util\") pod \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.334571 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-bundle\") pod \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\" (UID: \"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b\") " Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.335525 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-bundle" (OuterVolumeSpecName: "bundle") pod "3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" (UID: "3e9000bc-caf4-4e29-9b8f-8d59434c0e3b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.343267 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-kube-api-access-nz7nn" (OuterVolumeSpecName: "kube-api-access-nz7nn") pod "3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" (UID: "3e9000bc-caf4-4e29-9b8f-8d59434c0e3b"). InnerVolumeSpecName "kube-api-access-nz7nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.343377 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5122d57d-31ab-4312-a0d7-32c11806847f-kube-api-access-d9ntl" (OuterVolumeSpecName: "kube-api-access-d9ntl") pod "5122d57d-31ab-4312-a0d7-32c11806847f" (UID: "5122d57d-31ab-4312-a0d7-32c11806847f"). InnerVolumeSpecName "kube-api-access-d9ntl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.374987 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5122d57d-31ab-4312-a0d7-32c11806847f-config-data" (OuterVolumeSpecName: "config-data") pod "5122d57d-31ab-4312-a0d7-32c11806847f" (UID: "5122d57d-31ab-4312-a0d7-32c11806847f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.436504 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5122d57d-31ab-4312-a0d7-32c11806847f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.436552 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz7nn\" (UniqueName: \"kubernetes.io/projected/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-kube-api-access-nz7nn\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.436570 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9ntl\" (UniqueName: \"kubernetes.io/projected/5122d57d-31ab-4312-a0d7-32c11806847f-kube-api-access-d9ntl\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.436582 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.793951 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" event={"ID":"3e9000bc-caf4-4e29-9b8f-8d59434c0e3b","Type":"ContainerDied","Data":"1c9b198a37f34bd58e3be2e03762bd2ec4ce6a75780f2d2f22c30aa9abd5683c"} Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.793999 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c9b198a37f34bd58e3be2e03762bd2ec4ce6a75780f2d2f22c30aa9abd5683c" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.793999 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.795310 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-hlp86" event={"ID":"5122d57d-31ab-4312-a0d7-32c11806847f","Type":"ContainerDied","Data":"afe1c91d5a6656a00238af97e96540ca337967bbd5103dc0a777a830a2ff2e98"} Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.795344 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe1c91d5a6656a00238af97e96540ca337967bbd5103dc0a777a830a2ff2e98" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.795357 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-hlp86" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.824187 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-util" (OuterVolumeSpecName: "util") pod "3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" (UID: "3e9000bc-caf4-4e29-9b8f-8d59434c0e3b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:41:45 crc kubenswrapper[4931]: I0131 04:41:45.842781 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e9000bc-caf4-4e29-9b8f-8d59434c0e3b-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.048256 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-59mt8"] Jan 31 04:41:46 crc kubenswrapper[4931]: E0131 04:41:46.049263 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerName="util" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049281 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerName="util" Jan 31 04:41:46 crc kubenswrapper[4931]: E0131 04:41:46.049295 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerName="extract" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049301 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerName="extract" Jan 31 04:41:46 crc kubenswrapper[4931]: E0131 04:41:46.049322 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerName="pull" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049328 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerName="pull" Jan 31 04:41:46 crc kubenswrapper[4931]: E0131 04:41:46.049340 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerName="extract" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049346 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerName="extract" Jan 31 04:41:46 crc kubenswrapper[4931]: E0131 04:41:46.049352 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerName="pull" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049358 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerName="pull" Jan 31 04:41:46 crc kubenswrapper[4931]: E0131 04:41:46.049372 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerName="util" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049379 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerName="util" Jan 31 04:41:46 crc kubenswrapper[4931]: E0131 04:41:46.049390 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5122d57d-31ab-4312-a0d7-32c11806847f" containerName="keystone-db-sync" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049398 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5122d57d-31ab-4312-a0d7-32c11806847f" containerName="keystone-db-sync" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049584 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda2f66e-e58a-4521-870a-b05a8cfef2ab" containerName="extract" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.049595 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5122d57d-31ab-4312-a0d7-32c11806847f" containerName="keystone-db-sync" Jan 31 04:41:46 crc 
kubenswrapper[4931]: I0131 04:41:46.049609 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9000bc-caf4-4e29-9b8f-8d59434c0e3b" containerName="extract" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.050379 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.054112 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.054251 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.054251 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-p4wm7" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.055617 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.063481 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-59mt8"] Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.149134 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-credential-keys\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.149213 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-config-data\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.149268 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-fernet-keys\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.149295 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-scripts\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.149324 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhv5\" (UniqueName: \"kubernetes.io/projected/4a49df57-04e5-4967-bba4-f797623943f3-kube-api-access-9lhv5\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.250573 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-config-data\") pod \"keystone-bootstrap-59mt8\" (UID: 
\"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.251133 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-fernet-keys\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.251262 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-scripts\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.251474 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhv5\" (UniqueName: \"kubernetes.io/projected/4a49df57-04e5-4967-bba4-f797623943f3-kube-api-access-9lhv5\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.251692 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-credential-keys\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.255425 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-config-data\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.255586 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-fernet-keys\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.256392 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-scripts\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.263158 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-credential-keys\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.274809 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhv5\" (UniqueName: \"kubernetes.io/projected/4a49df57-04e5-4967-bba4-f797623943f3-kube-api-access-9lhv5\") pod \"keystone-bootstrap-59mt8\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc 
kubenswrapper[4931]: I0131 04:41:46.379038 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:46 crc kubenswrapper[4931]: I0131 04:41:46.807106 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-59mt8"] Jan 31 04:41:47 crc kubenswrapper[4931]: I0131 04:41:47.810297 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" event={"ID":"4a49df57-04e5-4967-bba4-f797623943f3","Type":"ContainerStarted","Data":"bb2f39882fe27348f8c28f27de103569e8e7de56edcc737c2f912dffbfabfe43"} Jan 31 04:41:47 crc kubenswrapper[4931]: I0131 04:41:47.810627 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" event={"ID":"4a49df57-04e5-4967-bba4-f797623943f3","Type":"ContainerStarted","Data":"0457c6afdcf35a6520d035178b0534777069804c2b6557e7e14518c3a242e41a"} Jan 31 04:41:47 crc kubenswrapper[4931]: I0131 04:41:47.828036 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" podStartSLOduration=1.8280153989999999 podStartE2EDuration="1.828015399s" podCreationTimestamp="2026-01-31 04:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:41:47.827044322 +0000 UTC m=+1066.636273216" watchObservedRunningTime="2026-01-31 04:41:47.828015399 +0000 UTC m=+1066.637244283" Jan 31 04:41:50 crc kubenswrapper[4931]: I0131 04:41:50.831253 4931 generic.go:334] "Generic (PLEG): container finished" podID="4a49df57-04e5-4967-bba4-f797623943f3" containerID="bb2f39882fe27348f8c28f27de103569e8e7de56edcc737c2f912dffbfabfe43" exitCode=0 Jan 31 04:41:50 crc kubenswrapper[4931]: I0131 04:41:50.831327 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" event={"ID":"4a49df57-04e5-4967-bba4-f797623943f3","Type":"ContainerDied","Data":"bb2f39882fe27348f8c28f27de103569e8e7de56edcc737c2f912dffbfabfe43"} Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.109898 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.231582 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-config-data\") pod \"4a49df57-04e5-4967-bba4-f797623943f3\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.231674 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-credential-keys\") pod \"4a49df57-04e5-4967-bba4-f797623943f3\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.231764 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lhv5\" (UniqueName: \"kubernetes.io/projected/4a49df57-04e5-4967-bba4-f797623943f3-kube-api-access-9lhv5\") pod \"4a49df57-04e5-4967-bba4-f797623943f3\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.231845 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-fernet-keys\") pod \"4a49df57-04e5-4967-bba4-f797623943f3\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.231892 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-scripts\") pod \"4a49df57-04e5-4967-bba4-f797623943f3\" (UID: \"4a49df57-04e5-4967-bba4-f797623943f3\") " Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.241899 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4a49df57-04e5-4967-bba4-f797623943f3" (UID: "4a49df57-04e5-4967-bba4-f797623943f3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.242698 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4a49df57-04e5-4967-bba4-f797623943f3" (UID: "4a49df57-04e5-4967-bba4-f797623943f3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.243554 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a49df57-04e5-4967-bba4-f797623943f3-kube-api-access-9lhv5" (OuterVolumeSpecName: "kube-api-access-9lhv5") pod "4a49df57-04e5-4967-bba4-f797623943f3" (UID: "4a49df57-04e5-4967-bba4-f797623943f3"). InnerVolumeSpecName "kube-api-access-9lhv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.244105 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-scripts" (OuterVolumeSpecName: "scripts") pod "4a49df57-04e5-4967-bba4-f797623943f3" (UID: "4a49df57-04e5-4967-bba4-f797623943f3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.259879 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-config-data" (OuterVolumeSpecName: "config-data") pod "4a49df57-04e5-4967-bba4-f797623943f3" (UID: "4a49df57-04e5-4967-bba4-f797623943f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.335067 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.335118 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.335148 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.335192 4931 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a49df57-04e5-4967-bba4-f797623943f3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.335204 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lhv5\" (UniqueName: \"kubernetes.io/projected/4a49df57-04e5-4967-bba4-f797623943f3-kube-api-access-9lhv5\") on node \"crc\" DevicePath \"\"" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.846962 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" event={"ID":"4a49df57-04e5-4967-bba4-f797623943f3","Type":"ContainerDied","Data":"0457c6afdcf35a6520d035178b0534777069804c2b6557e7e14518c3a242e41a"} Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.847267 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0457c6afdcf35a6520d035178b0534777069804c2b6557e7e14518c3a242e41a" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.847022 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-59mt8" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.971404 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-856cb9b857-klqt6"] Jan 31 04:41:52 crc kubenswrapper[4931]: E0131 04:41:52.971924 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a49df57-04e5-4967-bba4-f797623943f3" containerName="keystone-bootstrap" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.972003 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a49df57-04e5-4967-bba4-f797623943f3" containerName="keystone-bootstrap" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.972205 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a49df57-04e5-4967-bba4-f797623943f3" containerName="keystone-bootstrap" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.972762 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.974639 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.975585 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.976341 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.986599 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-p4wm7" Jan 31 04:41:52 crc kubenswrapper[4931]: I0131 04:41:52.996530 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-856cb9b857-klqt6"] Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.149048 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbl4d\" (UniqueName: \"kubernetes.io/projected/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-kube-api-access-dbl4d\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.149132 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-credential-keys\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.149183 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-fernet-keys\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.149233 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-scripts\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.149284 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-config-data\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.253557 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbl4d\" (UniqueName: \"kubernetes.io/projected/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-kube-api-access-dbl4d\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.253640 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-credential-keys\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.253674 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-fernet-keys\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.253710 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-scripts\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.253762 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-config-data\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.259549 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-fernet-keys\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.260583 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-config-data\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.261126 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-scripts\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.268115 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-credential-keys\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.271618 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbl4d\" (UniqueName: \"kubernetes.io/projected/be9d3e91-53c4-4d15-aaa5-3ff67279dc3f-kube-api-access-dbl4d\") pod \"keystone-856cb9b857-klqt6\" (UID: \"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f\") " pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.287307 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.721597 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-856cb9b857-klqt6"] Jan 31 04:41:53 crc kubenswrapper[4931]: I0131 04:41:53.855471 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" event={"ID":"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f","Type":"ContainerStarted","Data":"08d038302ee9018993ca78ade28743eea1017c24fe1a968e48129d007a8175e5"} Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.605055 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk"] Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.606356 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.607790 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-wjjl8" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.609756 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.618653 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk"] Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.773904 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42s2\" (UniqueName: \"kubernetes.io/projected/bbadb100-e582-46cd-9460-6bf083b2f53e-kube-api-access-l42s2\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.774011 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbadb100-e582-46cd-9460-6bf083b2f53e-apiservice-cert\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.774062 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbadb100-e582-46cd-9460-6bf083b2f53e-webhook-cert\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.864240 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" event={"ID":"be9d3e91-53c4-4d15-aaa5-3ff67279dc3f","Type":"ContainerStarted","Data":"56e9ae918dc2d35b0acb28f8cd0101ac1ad0ca512da3b8e63c4165f6fed68606"} Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.864315 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.875249 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbadb100-e582-46cd-9460-6bf083b2f53e-webhook-cert\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.875346 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l42s2\" (UniqueName: \"kubernetes.io/projected/bbadb100-e582-46cd-9460-6bf083b2f53e-kube-api-access-l42s2\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.875408 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbadb100-e582-46cd-9460-6bf083b2f53e-apiservice-cert\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.881712 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbadb100-e582-46cd-9460-6bf083b2f53e-apiservice-cert\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.892920 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbadb100-e582-46cd-9460-6bf083b2f53e-webhook-cert\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.900370 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" podStartSLOduration=2.900353954 podStartE2EDuration="2.900353954s" podCreationTimestamp="2026-01-31 04:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:41:54.895931191 +0000 UTC m=+1073.705160065" watchObservedRunningTime="2026-01-31 04:41:54.900353954 +0000 UTC m=+1073.709582828" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.907261 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l42s2\" (UniqueName: \"kubernetes.io/projected/bbadb100-e582-46cd-9460-6bf083b2f53e-kube-api-access-l42s2\") pod \"swift-operator-controller-manager-f997d59bd-m9rvk\" (UID: \"bbadb100-e582-46cd-9460-6bf083b2f53e\") " pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:54 crc kubenswrapper[4931]: I0131 04:41:54.924732 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:41:55 crc kubenswrapper[4931]: I0131 04:41:55.388268 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk"] Jan 31 04:41:55 crc kubenswrapper[4931]: I0131 04:41:55.874446 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" event={"ID":"bbadb100-e582-46cd-9460-6bf083b2f53e","Type":"ContainerStarted","Data":"f1a18deae794221e2d13c5feff4f7a0c9e5d2a9978c8162ea0af593ac69ddd66"} Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.038396 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9"] Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.040647 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.044388 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.044442 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bb57j" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.055663 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9"] Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.175577 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpj7\" (UniqueName: \"kubernetes.io/projected/f5716144-7d9f-4472-926b-eee4337385fd-kube-api-access-rlpj7\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.176093 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f5716144-7d9f-4472-926b-eee4337385fd-webhook-cert\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.176153 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f5716144-7d9f-4472-926b-eee4337385fd-apiservice-cert\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.277612 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f5716144-7d9f-4472-926b-eee4337385fd-webhook-cert\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 
04:42:01.277679 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f5716144-7d9f-4472-926b-eee4337385fd-apiservice-cert\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.277764 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlpj7\" (UniqueName: \"kubernetes.io/projected/f5716144-7d9f-4472-926b-eee4337385fd-kube-api-access-rlpj7\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.288492 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f5716144-7d9f-4472-926b-eee4337385fd-apiservice-cert\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.290759 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f5716144-7d9f-4472-926b-eee4337385fd-webhook-cert\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.304609 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlpj7\" (UniqueName: \"kubernetes.io/projected/f5716144-7d9f-4472-926b-eee4337385fd-kube-api-access-rlpj7\") pod \"horizon-operator-controller-manager-69b9d97bb7-8w7m9\" (UID: \"f5716144-7d9f-4472-926b-eee4337385fd\") " pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.365773 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.937581 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" event={"ID":"bbadb100-e582-46cd-9460-6bf083b2f53e","Type":"ContainerStarted","Data":"968eb2845d75e87ce6681173dca5923c28e17166cceccf8727d48cb8e9823dce"} Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.938154 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.938176 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" event={"ID":"bbadb100-e582-46cd-9460-6bf083b2f53e","Type":"ContainerStarted","Data":"cbe3897a1221fd425eecd1e3c0ef4249d5d6622359b246522697f17650d2ea16"} Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.954028 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9"] Jan 31 04:42:01 crc kubenswrapper[4931]: W0131 04:42:01.958213 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5716144_7d9f_4472_926b_eee4337385fd.slice/crio-5b084c51e1ed202cc1b8df681efa3004abec315c4cadc1d78e23fab46a6c5ea2 WatchSource:0}: Error finding container 5b084c51e1ed202cc1b8df681efa3004abec315c4cadc1d78e23fab46a6c5ea2: Status 404 returned error can't find the container with id 5b084c51e1ed202cc1b8df681efa3004abec315c4cadc1d78e23fab46a6c5ea2 Jan 31 04:42:01 crc kubenswrapper[4931]: I0131 04:42:01.968238 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" podStartSLOduration=2.248592317 podStartE2EDuration="7.968198203s" podCreationTimestamp="2026-01-31 04:41:54 +0000 UTC" firstStartedPulling="2026-01-31 04:41:55.39489333 +0000 UTC m=+1074.204122204" lastFinishedPulling="2026-01-31 04:42:01.114499216 +0000 UTC m=+1079.923728090" observedRunningTime="2026-01-31 04:42:01.966612509 +0000 UTC m=+1080.775841403" watchObservedRunningTime="2026-01-31 04:42:01.968198203 +0000 UTC m=+1080.777427077" Jan 31 04:42:02 crc kubenswrapper[4931]: I0131 04:42:02.954873 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" event={"ID":"f5716144-7d9f-4472-926b-eee4337385fd","Type":"ContainerStarted","Data":"5b084c51e1ed202cc1b8df681efa3004abec315c4cadc1d78e23fab46a6c5ea2"} Jan 31 04:42:03 crc kubenswrapper[4931]: I0131 04:42:03.963343 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" event={"ID":"f5716144-7d9f-4472-926b-eee4337385fd","Type":"ContainerStarted","Data":"544fff1950523db06d2f56fc455f20fc9588b421c652a84424faec96605fef75"} Jan 31 04:42:04 crc kubenswrapper[4931]: I0131 04:42:04.973148 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" event={"ID":"f5716144-7d9f-4472-926b-eee4337385fd","Type":"ContainerStarted","Data":"5d939a37bdf7567448defb24bebfd873706db3fa01a227360e45b15b40c9e222"} Jan 31 04:42:04 crc kubenswrapper[4931]: I0131 04:42:04.973914 4931 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:05 crc kubenswrapper[4931]: I0131 04:42:05.004628 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" podStartSLOduration=2.237707104 podStartE2EDuration="4.00460413s" podCreationTimestamp="2026-01-31 04:42:01 +0000 UTC" firstStartedPulling="2026-01-31 04:42:01.962041883 +0000 UTC m=+1080.771270757" lastFinishedPulling="2026-01-31 04:42:03.728938909 +0000 UTC m=+1082.538167783" observedRunningTime="2026-01-31 04:42:05.000955429 +0000 UTC m=+1083.810184353" watchObservedRunningTime="2026-01-31 04:42:05.00460413 +0000 UTC m=+1083.813833064" Jan 31 04:42:11 crc kubenswrapper[4931]: I0131 04:42:11.371060 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-69b9d97bb7-8w7m9" Jan 31 04:42:14 crc kubenswrapper[4931]: I0131 04:42:14.928486 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-f997d59bd-m9rvk" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.385044 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.389969 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.392888 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-swift-dockercfg-cl2lf" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.393858 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-conf" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.395307 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-files" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.397318 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-storage-config-data" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.414918 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.476023 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.476105 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.476139 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0355d163-55e9-4ac4-8dd4-081e9a637aaf-cache\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: 
I0131 04:42:20.476156 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99fw\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-kube-api-access-q99fw\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.476196 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0355d163-55e9-4ac4-8dd4-081e9a637aaf-lock\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.577867 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.578371 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.578565 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0355d163-55e9-4ac4-8dd4-081e9a637aaf-cache\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: E0131 04:42:20.578087 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:20 crc kubenswrapper[4931]: E0131 04:42:20.578714 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.578760 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.578683 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99fw\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-kube-api-access-q99fw\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: E0131 04:42:20.578807 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift podName:0355d163-55e9-4ac4-8dd4-081e9a637aaf nodeName:}" failed. No retries permitted until 2026-01-31 04:42:21.078784984 +0000 UTC m=+1099.888013858 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift") pod "swift-storage-0" (UID: "0355d163-55e9-4ac4-8dd4-081e9a637aaf") : configmap "swift-ring-files" not found Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.578888 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0355d163-55e9-4ac4-8dd4-081e9a637aaf-lock\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.579756 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0355d163-55e9-4ac4-8dd4-081e9a637aaf-lock\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.579771 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0355d163-55e9-4ac4-8dd4-081e9a637aaf-cache\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.600526 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99fw\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-kube-api-access-q99fw\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:20 crc kubenswrapper[4931]: I0131 04:42:20.612950 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:21 crc kubenswrapper[4931]: I0131 04:42:21.085813 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:21 crc kubenswrapper[4931]: E0131 04:42:21.086017 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:21 crc kubenswrapper[4931]: E0131 04:42:21.086233 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:42:21 crc kubenswrapper[4931]: E0131 04:42:21.086291 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift podName:0355d163-55e9-4ac4-8dd4-081e9a637aaf nodeName:}" failed. No retries permitted until 2026-01-31 04:42:22.08627258 +0000 UTC m=+1100.895501454 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift") pod "swift-storage-0" (UID: "0355d163-55e9-4ac4-8dd4-081e9a637aaf") : configmap "swift-ring-files" not found Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.012640 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-x98b6"] Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.013563 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.016131 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-index-dockercfg-kjsp6" Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.024736 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-x98b6"] Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.103213 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4994f\" (UniqueName: \"kubernetes.io/projected/7a9610db-1a62-45aa-9fab-78b244123115-kube-api-access-4994f\") pod \"glance-operator-index-x98b6\" (UID: \"7a9610db-1a62-45aa-9fab-78b244123115\") " pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.103293 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:22 crc kubenswrapper[4931]: E0131 04:42:22.103508 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:22 crc kubenswrapper[4931]: E0131 04:42:22.103522 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:42:22 crc kubenswrapper[4931]: E0131 04:42:22.103559 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift podName:0355d163-55e9-4ac4-8dd4-081e9a637aaf nodeName:}" failed. No retries permitted until 2026-01-31 04:42:24.103545548 +0000 UTC m=+1102.912774412 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift") pod "swift-storage-0" (UID: "0355d163-55e9-4ac4-8dd4-081e9a637aaf") : configmap "swift-ring-files" not found Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.204878 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4994f\" (UniqueName: \"kubernetes.io/projected/7a9610db-1a62-45aa-9fab-78b244123115-kube-api-access-4994f\") pod \"glance-operator-index-x98b6\" (UID: \"7a9610db-1a62-45aa-9fab-78b244123115\") " pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.228680 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4994f\" (UniqueName: \"kubernetes.io/projected/7a9610db-1a62-45aa-9fab-78b244123115-kube-api-access-4994f\") pod \"glance-operator-index-x98b6\" (UID: \"7a9610db-1a62-45aa-9fab-78b244123115\") " pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.333259 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:22 crc kubenswrapper[4931]: I0131 04:42:22.768004 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-x98b6"] Jan 31 04:42:22 crc kubenswrapper[4931]: W0131 04:42:22.771160 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a9610db_1a62_45aa_9fab_78b244123115.slice/crio-c65d6659cd505467e9bc3db88c654693fbaac1c75ea317ed1a0118970e5b8cb4 WatchSource:0}: Error finding container c65d6659cd505467e9bc3db88c654693fbaac1c75ea317ed1a0118970e5b8cb4: Status 404 returned error can't find the container with id c65d6659cd505467e9bc3db88c654693fbaac1c75ea317ed1a0118970e5b8cb4 Jan 31 04:42:23 crc kubenswrapper[4931]: I0131 04:42:23.231229 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-x98b6" event={"ID":"7a9610db-1a62-45aa-9fab-78b244123115","Type":"ContainerStarted","Data":"c65d6659cd505467e9bc3db88c654693fbaac1c75ea317ed1a0118970e5b8cb4"} Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.129337 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:24 crc kubenswrapper[4931]: E0131 04:42:24.129549 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:24 crc kubenswrapper[4931]: E0131 04:42:24.129761 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:42:24 crc kubenswrapper[4931]: E0131 04:42:24.129822 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift podName:0355d163-55e9-4ac4-8dd4-081e9a637aaf nodeName:}" failed. No retries permitted until 2026-01-31 04:42:28.129803327 +0000 UTC m=+1106.939032211 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift") pod "swift-storage-0" (UID: "0355d163-55e9-4ac4-8dd4-081e9a637aaf") : configmap "swift-ring-files" not found Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.459032 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-mpf44"] Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.460958 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.462965 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-config-data" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.463369 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-scripts" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.463651 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-proxy-config-data" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.472548 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-mpf44"] Jan 31 04:42:24 crc kubenswrapper[4931]: E0131 04:42:24.478175 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-72rwj ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[dispersionconf etc-swift kube-api-access-72rwj ring-data-devices scripts swiftconf]: context canceled" pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" podUID="3403c6bf-b935-49c8-8239-244241a92edb" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.498311 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-trn87"] Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.499457 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.511468 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-trn87"] Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.519763 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-mpf44"] Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636030 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rwj\" (UniqueName: \"kubernetes.io/projected/3403c6bf-b935-49c8-8239-244241a92edb-kube-api-access-72rwj\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636120 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fpn\" (UniqueName: \"kubernetes.io/projected/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-kube-api-access-75fpn\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636200 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-scripts\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636240 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3403c6bf-b935-49c8-8239-244241a92edb-etc-swift\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636259 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-dispersionconf\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-swiftconf\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636333 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-ring-data-devices\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636389 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-scripts\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636412 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-swiftconf\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636443 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-ring-data-devices\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636465 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-dispersionconf\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.636691 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-etc-swift\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.722277 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/keystone-856cb9b857-klqt6" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738321 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-etc-swift\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738421 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rwj\" (UniqueName: \"kubernetes.io/projected/3403c6bf-b935-49c8-8239-244241a92edb-kube-api-access-72rwj\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738492 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fpn\" (UniqueName: \"kubernetes.io/projected/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-kube-api-access-75fpn\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738544 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-scripts\") pod \"swift-ring-rebalance-trn87\" (UID: 
\"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738573 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3403c6bf-b935-49c8-8239-244241a92edb-etc-swift\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738599 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-dispersionconf\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738649 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-swiftconf\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738680 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-ring-data-devices\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738752 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-scripts\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738785 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-swiftconf\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-ring-data-devices\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.738865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-dispersionconf\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.739917 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-etc-swift\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " 
pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.741005 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-scripts\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.741223 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3403c6bf-b935-49c8-8239-244241a92edb-etc-swift\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.741301 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-scripts\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.741407 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-ring-data-devices\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.741712 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-ring-data-devices\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.745600 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-dispersionconf\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.745599 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-swiftconf\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.746606 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-dispersionconf\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.752959 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-swiftconf\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.765276 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rwj\" (UniqueName: \"kubernetes.io/projected/3403c6bf-b935-49c8-8239-244241a92edb-kube-api-access-72rwj\") pod \"swift-ring-rebalance-mpf44\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.765560 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fpn\" (UniqueName: \"kubernetes.io/projected/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-kube-api-access-75fpn\") pod \"swift-ring-rebalance-trn87\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:24 crc kubenswrapper[4931]: I0131 04:42:24.821997 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.244312 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.254436 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347301 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-ring-data-devices\") pod \"3403c6bf-b935-49c8-8239-244241a92edb\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347419 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-swiftconf\") pod \"3403c6bf-b935-49c8-8239-244241a92edb\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347464 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3403c6bf-b935-49c8-8239-244241a92edb-etc-swift\") pod \"3403c6bf-b935-49c8-8239-244241a92edb\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347487 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72rwj\" (UniqueName: \"kubernetes.io/projected/3403c6bf-b935-49c8-8239-244241a92edb-kube-api-access-72rwj\") pod \"3403c6bf-b935-49c8-8239-244241a92edb\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347522 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-dispersionconf\") pod \"3403c6bf-b935-49c8-8239-244241a92edb\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347620 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-scripts\") pod \"3403c6bf-b935-49c8-8239-244241a92edb\" (UID: \"3403c6bf-b935-49c8-8239-244241a92edb\") " Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347780 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3403c6bf-b935-49c8-8239-244241a92edb" (UID: "3403c6bf-b935-49c8-8239-244241a92edb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.347867 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3403c6bf-b935-49c8-8239-244241a92edb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3403c6bf-b935-49c8-8239-244241a92edb" (UID: "3403c6bf-b935-49c8-8239-244241a92edb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.348035 4931 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.348071 4931 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3403c6bf-b935-49c8-8239-244241a92edb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.348233 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-scripts" (OuterVolumeSpecName: "scripts") pod "3403c6bf-b935-49c8-8239-244241a92edb" (UID: "3403c6bf-b935-49c8-8239-244241a92edb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.352445 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3403c6bf-b935-49c8-8239-244241a92edb" (UID: "3403c6bf-b935-49c8-8239-244241a92edb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.352649 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3403c6bf-b935-49c8-8239-244241a92edb" (UID: "3403c6bf-b935-49c8-8239-244241a92edb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.352752 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3403c6bf-b935-49c8-8239-244241a92edb-kube-api-access-72rwj" (OuterVolumeSpecName: "kube-api-access-72rwj") pod "3403c6bf-b935-49c8-8239-244241a92edb" (UID: "3403c6bf-b935-49c8-8239-244241a92edb"). InnerVolumeSpecName "kube-api-access-72rwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.449657 4931 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.449710 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72rwj\" (UniqueName: \"kubernetes.io/projected/3403c6bf-b935-49c8-8239-244241a92edb-kube-api-access-72rwj\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.449742 4931 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3403c6bf-b935-49c8-8239-244241a92edb-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:25 crc kubenswrapper[4931]: I0131 04:42:25.449757 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3403c6bf-b935-49c8-8239-244241a92edb-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.209280 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-x98b6"] Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.250620 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-mpf44" Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.316438 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-mpf44"] Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.321802 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-mpf44"] Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.817026 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-pqm86"] Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.818610 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.825938 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-pqm86"] Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.899705 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-trn87"] Jan 31 04:42:26 crc kubenswrapper[4931]: I0131 04:42:26.980136 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94s5m\" (UniqueName: \"kubernetes.io/projected/f332e601-2a6d-46f5-9196-20d3cefa107f-kube-api-access-94s5m\") pod \"glance-operator-index-pqm86\" (UID: \"f332e601-2a6d-46f5-9196-20d3cefa107f\") " pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:27 crc kubenswrapper[4931]: I0131 04:42:27.081302 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94s5m\" (UniqueName: \"kubernetes.io/projected/f332e601-2a6d-46f5-9196-20d3cefa107f-kube-api-access-94s5m\") pod \"glance-operator-index-pqm86\" (UID: \"f332e601-2a6d-46f5-9196-20d3cefa107f\") " pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:27 crc kubenswrapper[4931]: I0131 04:42:27.102408 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94s5m\" (UniqueName: \"kubernetes.io/projected/f332e601-2a6d-46f5-9196-20d3cefa107f-kube-api-access-94s5m\") pod \"glance-operator-index-pqm86\" (UID: \"f332e601-2a6d-46f5-9196-20d3cefa107f\") " pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:27 crc kubenswrapper[4931]: I0131 04:42:27.147983 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:27 crc kubenswrapper[4931]: W0131 04:42:27.487225 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfe33f21_0c4a_4efe_a9f5_9cb3b71568c3.slice/crio-802bd22ff8a9862a927ee6c252fff7774a23305eb70227ced597b0bc02137930 WatchSource:0}: Error finding container 802bd22ff8a9862a927ee6c252fff7774a23305eb70227ced597b0bc02137930: Status 404 returned error can't find the container with id 802bd22ff8a9862a927ee6c252fff7774a23305eb70227ced597b0bc02137930 Jan 31 04:42:27 crc kubenswrapper[4931]: I0131 04:42:27.906610 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3403c6bf-b935-49c8-8239-244241a92edb" path="/var/lib/kubelet/pods/3403c6bf-b935-49c8-8239-244241a92edb/volumes" Jan 31 04:42:28 crc kubenswrapper[4931]: I0131 04:42:28.197527 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:28 crc kubenswrapper[4931]: E0131 04:42:28.197750 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:28 crc kubenswrapper[4931]: E0131 04:42:28.197770 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:42:28 crc kubenswrapper[4931]: E0131 04:42:28.197831 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift podName:0355d163-55e9-4ac4-8dd4-081e9a637aaf nodeName:}" failed. No retries permitted until 2026-01-31 04:42:36.197812651 +0000 UTC m=+1115.007041525 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift") pod "swift-storage-0" (UID: "0355d163-55e9-4ac4-8dd4-081e9a637aaf") : configmap "swift-ring-files" not found Jan 31 04:42:28 crc kubenswrapper[4931]: I0131 04:42:28.263068 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" event={"ID":"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3","Type":"ContainerStarted","Data":"802bd22ff8a9862a927ee6c252fff7774a23305eb70227ced597b0bc02137930"} Jan 31 04:42:29 crc kubenswrapper[4931]: I0131 04:42:29.091851 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-pqm86"] Jan 31 04:42:29 crc kubenswrapper[4931]: W0131 04:42:29.095400 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf332e601_2a6d_46f5_9196_20d3cefa107f.slice/crio-adfc5b05f2dbfd6e7f6b07a8bf758c019c37a9627eb9c5689008ff213a9b5540 WatchSource:0}: Error finding container adfc5b05f2dbfd6e7f6b07a8bf758c019c37a9627eb9c5689008ff213a9b5540: Status 404 returned error can't find the container with id adfc5b05f2dbfd6e7f6b07a8bf758c019c37a9627eb9c5689008ff213a9b5540 Jan 31 04:42:29 crc kubenswrapper[4931]: I0131 04:42:29.270227 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-pqm86" event={"ID":"f332e601-2a6d-46f5-9196-20d3cefa107f","Type":"ContainerStarted","Data":"adfc5b05f2dbfd6e7f6b07a8bf758c019c37a9627eb9c5689008ff213a9b5540"} Jan 31 04:42:31 crc kubenswrapper[4931]: I0131 04:42:31.292280 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-pqm86" event={"ID":"f332e601-2a6d-46f5-9196-20d3cefa107f","Type":"ContainerStarted","Data":"9573be4674d752114fa42eeb1f4b2d95e6c24d24cd3840b533e060074797cc91"} Jan 31 04:42:31 crc kubenswrapper[4931]: I0131 04:42:31.296187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-x98b6" event={"ID":"7a9610db-1a62-45aa-9fab-78b244123115","Type":"ContainerStarted","Data":"be66f8c139d5abe53516f62a33ffb027ab78a9933b286ca0369442a9d96d90a6"} Jan 31 04:42:31 crc kubenswrapper[4931]: I0131 04:42:31.296320 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/glance-operator-index-x98b6" podUID="7a9610db-1a62-45aa-9fab-78b244123115" containerName="registry-server" containerID="cri-o://be66f8c139d5abe53516f62a33ffb027ab78a9933b286ca0369442a9d96d90a6" gracePeriod=2 Jan 31 04:42:31 crc kubenswrapper[4931]: I0131 04:42:31.313957 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-pqm86" podStartSLOduration=4.623493026 podStartE2EDuration="5.313938409s" podCreationTimestamp="2026-01-31 04:42:26 +0000 UTC" firstStartedPulling="2026-01-31 04:42:29.097260024 +0000 UTC m=+1107.906488898" lastFinishedPulling="2026-01-31 04:42:29.787705407 +0000 UTC m=+1108.596934281" observedRunningTime="2026-01-31 04:42:31.306085283 +0000 UTC m=+1110.115314157" watchObservedRunningTime="2026-01-31 04:42:31.313938409 +0000 UTC m=+1110.123167283" Jan 31 04:42:31 crc kubenswrapper[4931]: I0131 04:42:31.321952 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-x98b6" podStartSLOduration=3.315634408 podStartE2EDuration="10.32193631s" podCreationTimestamp="2026-01-31 
04:42:21 +0000 UTC" firstStartedPulling="2026-01-31 04:42:22.773838257 +0000 UTC m=+1101.583067131" lastFinishedPulling="2026-01-31 04:42:29.780140139 +0000 UTC m=+1108.589369033" observedRunningTime="2026-01-31 04:42:31.318369612 +0000 UTC m=+1110.127598496" watchObservedRunningTime="2026-01-31 04:42:31.32193631 +0000 UTC m=+1110.131165184" Jan 31 04:42:32 crc kubenswrapper[4931]: I0131 04:42:32.309417 4931 generic.go:334] "Generic (PLEG): container finished" podID="7a9610db-1a62-45aa-9fab-78b244123115" containerID="be66f8c139d5abe53516f62a33ffb027ab78a9933b286ca0369442a9d96d90a6" exitCode=0 Jan 31 04:42:32 crc kubenswrapper[4931]: I0131 04:42:32.309501 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-x98b6" event={"ID":"7a9610db-1a62-45aa-9fab-78b244123115","Type":"ContainerDied","Data":"be66f8c139d5abe53516f62a33ffb027ab78a9933b286ca0369442a9d96d90a6"} Jan 31 04:42:32 crc kubenswrapper[4931]: I0131 04:42:32.334249 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.375549 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.408448 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-proxy-5957d6665c-7659x"] Jan 31 04:42:33 crc kubenswrapper[4931]: E0131 04:42:33.408768 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9610db-1a62-45aa-9fab-78b244123115" containerName="registry-server" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.408985 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9610db-1a62-45aa-9fab-78b244123115" containerName="registry-server" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.409111 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9610db-1a62-45aa-9fab-78b244123115" containerName="registry-server" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.409842 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.439848 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-5957d6665c-7659x"] Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.473269 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4994f\" (UniqueName: \"kubernetes.io/projected/7a9610db-1a62-45aa-9fab-78b244123115-kube-api-access-4994f\") pod \"7a9610db-1a62-45aa-9fab-78b244123115\" (UID: \"7a9610db-1a62-45aa-9fab-78b244123115\") " Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.482862 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9610db-1a62-45aa-9fab-78b244123115-kube-api-access-4994f" (OuterVolumeSpecName: "kube-api-access-4994f") pod "7a9610db-1a62-45aa-9fab-78b244123115" (UID: "7a9610db-1a62-45aa-9fab-78b244123115"). InnerVolumeSpecName "kube-api-access-4994f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.575654 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4899d9b-4958-48d0-bec1-f13bc66b49a5-log-httpd\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.575766 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4899d9b-4958-48d0-bec1-f13bc66b49a5-config-data\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.575793 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.575867 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx589\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-kube-api-access-cx589\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.575900 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4899d9b-4958-48d0-bec1-f13bc66b49a5-run-httpd\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.575995 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4994f\" (UniqueName: \"kubernetes.io/projected/7a9610db-1a62-45aa-9fab-78b244123115-kube-api-access-4994f\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.676931 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4899d9b-4958-48d0-bec1-f13bc66b49a5-config-data\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.676990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.677063 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx589\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-kube-api-access-cx589\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " 
pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.677090 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4899d9b-4958-48d0-bec1-f13bc66b49a5-run-httpd\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.677156 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4899d9b-4958-48d0-bec1-f13bc66b49a5-log-httpd\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: E0131 04:42:33.677281 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:33 crc kubenswrapper[4931]: E0131 04:42:33.677312 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-5957d6665c-7659x: configmap "swift-ring-files" not found Jan 31 04:42:33 crc kubenswrapper[4931]: E0131 04:42:33.677382 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift podName:e4899d9b-4958-48d0-bec1-f13bc66b49a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:42:34.177354187 +0000 UTC m=+1112.986583061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift") pod "swift-proxy-5957d6665c-7659x" (UID: "e4899d9b-4958-48d0-bec1-f13bc66b49a5") : configmap "swift-ring-files" not found Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.678300 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4899d9b-4958-48d0-bec1-f13bc66b49a5-log-httpd\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.678647 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4899d9b-4958-48d0-bec1-f13bc66b49a5-run-httpd\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.683492 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4899d9b-4958-48d0-bec1-f13bc66b49a5-config-data\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:33 crc kubenswrapper[4931]: I0131 04:42:33.700537 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx589\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-kube-api-access-cx589\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.185613 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:34 crc kubenswrapper[4931]: E0131 04:42:34.185868 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:34 crc kubenswrapper[4931]: E0131 04:42:34.185919 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-5957d6665c-7659x: configmap "swift-ring-files" not found Jan 31 04:42:34 crc kubenswrapper[4931]: E0131 04:42:34.185998 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift podName:e4899d9b-4958-48d0-bec1-f13bc66b49a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:42:35.185975091 +0000 UTC m=+1113.995203975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift") pod "swift-proxy-5957d6665c-7659x" (UID: "e4899d9b-4958-48d0-bec1-f13bc66b49a5") : configmap "swift-ring-files" not found Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.327436 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" event={"ID":"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3","Type":"ContainerStarted","Data":"5d3d7a3cb34dc374559df312b41f231f5f05b46985ecfcfcd3396882160a23e1"} Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.329088 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-x98b6" event={"ID":"7a9610db-1a62-45aa-9fab-78b244123115","Type":"ContainerDied","Data":"c65d6659cd505467e9bc3db88c654693fbaac1c75ea317ed1a0118970e5b8cb4"} Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.329162 4931 scope.go:117] "RemoveContainer" containerID="be66f8c139d5abe53516f62a33ffb027ab78a9933b286ca0369442a9d96d90a6" Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.329110 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-x98b6" Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.349969 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" podStartSLOduration=3.888303393 podStartE2EDuration="10.349923648s" podCreationTimestamp="2026-01-31 04:42:24 +0000 UTC" firstStartedPulling="2026-01-31 04:42:27.4907574 +0000 UTC m=+1106.299986274" lastFinishedPulling="2026-01-31 04:42:33.952377655 +0000 UTC m=+1112.761606529" observedRunningTime="2026-01-31 04:42:34.345461006 +0000 UTC m=+1113.154689880" watchObservedRunningTime="2026-01-31 04:42:34.349923648 +0000 UTC m=+1113.159152522" Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.372909 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-x98b6"] Jan 31 04:42:34 crc kubenswrapper[4931]: I0131 04:42:34.381586 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-index-x98b6"] Jan 31 04:42:35 crc kubenswrapper[4931]: I0131 04:42:35.202101 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:35 crc kubenswrapper[4931]: E0131 04:42:35.202309 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:35 crc kubenswrapper[4931]: E0131 04:42:35.202323 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-5957d6665c-7659x: configmap "swift-ring-files" not found Jan 31 04:42:35 crc kubenswrapper[4931]: E0131 04:42:35.202380 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift podName:e4899d9b-4958-48d0-bec1-f13bc66b49a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:42:37.202362346 +0000 UTC m=+1116.011591220 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift") pod "swift-proxy-5957d6665c-7659x" (UID: "e4899d9b-4958-48d0-bec1-f13bc66b49a5") : configmap "swift-ring-files" not found Jan 31 04:42:35 crc kubenswrapper[4931]: I0131 04:42:35.912014 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9610db-1a62-45aa-9fab-78b244123115" path="/var/lib/kubelet/pods/7a9610db-1a62-45aa-9fab-78b244123115/volumes" Jan 31 04:42:36 crc kubenswrapper[4931]: I0131 04:42:36.217582 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:36 crc kubenswrapper[4931]: E0131 04:42:36.217750 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:36 crc kubenswrapper[4931]: E0131 04:42:36.218085 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:42:36 crc kubenswrapper[4931]: E0131 04:42:36.218144 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift podName:0355d163-55e9-4ac4-8dd4-081e9a637aaf nodeName:}" failed. No retries permitted until 2026-01-31 04:42:52.218123982 +0000 UTC m=+1131.027352856 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift") pod "swift-storage-0" (UID: "0355d163-55e9-4ac4-8dd4-081e9a637aaf") : configmap "swift-ring-files" not found Jan 31 04:42:37 crc kubenswrapper[4931]: I0131 04:42:37.148273 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:37 crc kubenswrapper[4931]: I0131 04:42:37.148328 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:37 crc kubenswrapper[4931]: I0131 04:42:37.178311 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:37 crc kubenswrapper[4931]: I0131 04:42:37.232940 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:37 crc kubenswrapper[4931]: E0131 04:42:37.233202 4931 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:42:37 crc kubenswrapper[4931]: E0131 04:42:37.233244 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-5957d6665c-7659x: configmap "swift-ring-files" not found Jan 31 04:42:37 crc kubenswrapper[4931]: E0131 04:42:37.233350 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift podName:e4899d9b-4958-48d0-bec1-f13bc66b49a5 nodeName:}" failed. 
No retries permitted until 2026-01-31 04:42:41.233316553 +0000 UTC m=+1120.042545467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift") pod "swift-proxy-5957d6665c-7659x" (UID: "e4899d9b-4958-48d0-bec1-f13bc66b49a5") : configmap "swift-ring-files" not found Jan 31 04:42:37 crc kubenswrapper[4931]: I0131 04:42:37.386338 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-index-pqm86" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.252865 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w"] Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.254710 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.257452 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rb6qm" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.273305 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w"] Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.389575 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t454z\" (UniqueName: \"kubernetes.io/projected/b97c9738-62e4-4623-8974-f8625930a8a5-kube-api-access-t454z\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.389635 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-bundle\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.389676 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-util\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.491191 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t454z\" (UniqueName: \"kubernetes.io/projected/b97c9738-62e4-4623-8974-f8625930a8a5-kube-api-access-t454z\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.491259 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-bundle\") 
pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.491298 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-util\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.491741 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-bundle\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.491809 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-util\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.510976 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t454z\" (UniqueName: \"kubernetes.io/projected/b97c9738-62e4-4623-8974-f8625930a8a5-kube-api-access-t454z\") pod \"c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:40 crc kubenswrapper[4931]: I0131 04:42:40.572762 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.046960 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w"] Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.303237 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.313251 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e4899d9b-4958-48d0-bec1-f13bc66b49a5-etc-swift\") pod \"swift-proxy-5957d6665c-7659x\" (UID: \"e4899d9b-4958-48d0-bec1-f13bc66b49a5\") " pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.377127 4931 generic.go:334] "Generic (PLEG): container finished" podID="b97c9738-62e4-4623-8974-f8625930a8a5" containerID="a758c23d6d6e7b4a7d71b69174539ffa3204af6b55e794bba0dd27b80064a25d" exitCode=0 Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.377819 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" event={"ID":"b97c9738-62e4-4623-8974-f8625930a8a5","Type":"ContainerDied","Data":"a758c23d6d6e7b4a7d71b69174539ffa3204af6b55e794bba0dd27b80064a25d"} Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.377850 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" event={"ID":"b97c9738-62e4-4623-8974-f8625930a8a5","Type":"ContainerStarted","Data":"35bbd1a8d94a70994018412150907689365aef624ac522d6589f4dd9f2c95df7"} Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.381442 4931 generic.go:334] "Generic (PLEG): container finished" podID="cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" containerID="5d3d7a3cb34dc374559df312b41f231f5f05b46985ecfcfcd3396882160a23e1" exitCode=0 Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.381472 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" event={"ID":"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3","Type":"ContainerDied","Data":"5d3d7a3cb34dc374559df312b41f231f5f05b46985ecfcfcd3396882160a23e1"} Jan 31 04:42:41 crc kubenswrapper[4931]: I0131 04:42:41.525125 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.272060 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-5957d6665c-7659x"] Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.389757 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" event={"ID":"e4899d9b-4958-48d0-bec1-f13bc66b49a5","Type":"ContainerStarted","Data":"7c2aa63ca01def875084030668859fe0ad491874d2c06c324850c5639263d6eb"} Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.779426 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.925706 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-ring-data-devices\") pod \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.926452 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-scripts\") pod \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.926560 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-etc-swift\") pod \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.926787 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-swiftconf\") pod \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.926952 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-dispersionconf\") pod \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.927071 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75fpn\" (UniqueName: \"kubernetes.io/projected/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-kube-api-access-75fpn\") pod \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\" (UID: \"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3\") " Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.927479 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" (UID: "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.927790 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" (UID: "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.931124 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-kube-api-access-75fpn" (OuterVolumeSpecName: "kube-api-access-75fpn") pod "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" (UID: "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3"). InnerVolumeSpecName "kube-api-access-75fpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.934446 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" (UID: "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.950860 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" (UID: "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:42:42 crc kubenswrapper[4931]: I0131 04:42:42.951241 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-scripts" (OuterVolumeSpecName: "scripts") pod "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" (UID: "cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.029602 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.029663 4931 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.029683 4931 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.029700 4931 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.029743 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75fpn\" (UniqueName: \"kubernetes.io/projected/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-kube-api-access-75fpn\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.029769 4931 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.397970 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" event={"ID":"e4899d9b-4958-48d0-bec1-f13bc66b49a5","Type":"ContainerStarted","Data":"d7bca2719e8168e225a8623502549ce78fa3a75fad20210bc2013572f6ae8252"} Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.398012 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" 
event={"ID":"e4899d9b-4958-48d0-bec1-f13bc66b49a5","Type":"ContainerStarted","Data":"67d04d897bbc9765137abd2883392c5a305a1e6df8eed3f82ba50badd5ac4618"} Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.398697 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.398760 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.401661 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" event={"ID":"b97c9738-62e4-4623-8974-f8625930a8a5","Type":"ContainerStarted","Data":"23fe3180d2c6cc8a7d41a72ac9dc0fafd09b5d310bd1c03517ae0dc1f0cc35bf"} Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.403171 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" event={"ID":"cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3","Type":"ContainerDied","Data":"802bd22ff8a9862a927ee6c252fff7774a23305eb70227ced597b0bc02137930"} Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.403198 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802bd22ff8a9862a927ee6c252fff7774a23305eb70227ced597b0bc02137930" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.403197 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-trn87" Jan 31 04:42:43 crc kubenswrapper[4931]: I0131 04:42:43.439070 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" podStartSLOduration=10.439049877 podStartE2EDuration="10.439049877s" podCreationTimestamp="2026-01-31 04:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:42:43.41700711 +0000 UTC m=+1122.226235984" watchObservedRunningTime="2026-01-31 04:42:43.439049877 +0000 UTC m=+1122.248278751" Jan 31 04:42:44 crc kubenswrapper[4931]: I0131 04:42:44.416785 4931 generic.go:334] "Generic (PLEG): container finished" podID="b97c9738-62e4-4623-8974-f8625930a8a5" containerID="23fe3180d2c6cc8a7d41a72ac9dc0fafd09b5d310bd1c03517ae0dc1f0cc35bf" exitCode=0 Jan 31 04:42:44 crc kubenswrapper[4931]: I0131 04:42:44.416963 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" event={"ID":"b97c9738-62e4-4623-8974-f8625930a8a5","Type":"ContainerDied","Data":"23fe3180d2c6cc8a7d41a72ac9dc0fafd09b5d310bd1c03517ae0dc1f0cc35bf"} Jan 31 04:42:45 crc kubenswrapper[4931]: I0131 04:42:45.427076 4931 generic.go:334] "Generic (PLEG): container finished" podID="b97c9738-62e4-4623-8974-f8625930a8a5" containerID="eada58055f81490cecc1657d800c42af1664889b4936e79997b519be0f770671" exitCode=0 Jan 31 04:42:45 crc kubenswrapper[4931]: I0131 04:42:45.427185 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" event={"ID":"b97c9738-62e4-4623-8974-f8625930a8a5","Type":"ContainerDied","Data":"eada58055f81490cecc1657d800c42af1664889b4936e79997b519be0f770671"} Jan 31 04:42:46 crc kubenswrapper[4931]: I0131 04:42:46.776622 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:46 crc kubenswrapper[4931]: I0131 04:42:46.900013 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t454z\" (UniqueName: \"kubernetes.io/projected/b97c9738-62e4-4623-8974-f8625930a8a5-kube-api-access-t454z\") pod \"b97c9738-62e4-4623-8974-f8625930a8a5\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " Jan 31 04:42:46 crc kubenswrapper[4931]: I0131 04:42:46.900151 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-bundle\") pod \"b97c9738-62e4-4623-8974-f8625930a8a5\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " Jan 31 04:42:46 crc kubenswrapper[4931]: I0131 04:42:46.900190 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-util\") pod \"b97c9738-62e4-4623-8974-f8625930a8a5\" (UID: \"b97c9738-62e4-4623-8974-f8625930a8a5\") " Jan 31 04:42:46 crc kubenswrapper[4931]: I0131 04:42:46.900830 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-bundle" (OuterVolumeSpecName: "bundle") pod "b97c9738-62e4-4623-8974-f8625930a8a5" (UID: "b97c9738-62e4-4623-8974-f8625930a8a5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:46 crc kubenswrapper[4931]: I0131 04:42:46.907007 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97c9738-62e4-4623-8974-f8625930a8a5-kube-api-access-t454z" (OuterVolumeSpecName: "kube-api-access-t454z") pod "b97c9738-62e4-4623-8974-f8625930a8a5" (UID: "b97c9738-62e4-4623-8974-f8625930a8a5"). InnerVolumeSpecName "kube-api-access-t454z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:42:47 crc kubenswrapper[4931]: I0131 04:42:47.001940 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:47 crc kubenswrapper[4931]: I0131 04:42:47.001994 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t454z\" (UniqueName: \"kubernetes.io/projected/b97c9738-62e4-4623-8974-f8625930a8a5-kube-api-access-t454z\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:47 crc kubenswrapper[4931]: I0131 04:42:47.212369 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-util" (OuterVolumeSpecName: "util") pod "b97c9738-62e4-4623-8974-f8625930a8a5" (UID: "b97c9738-62e4-4623-8974-f8625930a8a5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:47 crc kubenswrapper[4931]: I0131 04:42:47.306785 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b97c9738-62e4-4623-8974-f8625930a8a5-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:47 crc kubenswrapper[4931]: I0131 04:42:47.441757 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" event={"ID":"b97c9738-62e4-4623-8974-f8625930a8a5","Type":"ContainerDied","Data":"35bbd1a8d94a70994018412150907689365aef624ac522d6589f4dd9f2c95df7"} Jan 31 04:42:47 crc kubenswrapper[4931]: I0131 04:42:47.442014 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35bbd1a8d94a70994018412150907689365aef624ac522d6589f4dd9f2c95df7" Jan 31 04:42:47 crc kubenswrapper[4931]: I0131 04:42:47.441804 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w" Jan 31 04:42:51 crc kubenswrapper[4931]: I0131 04:42:51.132874 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:42:51 crc kubenswrapper[4931]: I0131 04:42:51.133548 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:42:51 crc kubenswrapper[4931]: I0131 04:42:51.528498 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:51 crc kubenswrapper[4931]: I0131 04:42:51.529442 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-5957d6665c-7659x" Jan 31 04:42:52 crc kubenswrapper[4931]: I0131 04:42:52.280011 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:52 crc kubenswrapper[4931]: I0131 04:42:52.302284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0355d163-55e9-4ac4-8dd4-081e9a637aaf-etc-swift\") pod \"swift-storage-0\" (UID: \"0355d163-55e9-4ac4-8dd4-081e9a637aaf\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:52 crc kubenswrapper[4931]: I0131 04:42:52.506326 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 04:42:52 crc kubenswrapper[4931]: I0131 04:42:52.948765 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 04:42:52 crc kubenswrapper[4931]: W0131 04:42:52.955392 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0355d163_55e9_4ac4_8dd4_081e9a637aaf.slice/crio-ac40d3791b2e007821e36117f84b7c407ad22b3b8c69ac7f3b7c59163ae8a580 WatchSource:0}: Error finding container ac40d3791b2e007821e36117f84b7c407ad22b3b8c69ac7f3b7c59163ae8a580: Status 404 returned error can't find the container with id ac40d3791b2e007821e36117f84b7c407ad22b3b8c69ac7f3b7c59163ae8a580 Jan 31 04:42:53 crc kubenswrapper[4931]: I0131 04:42:53.482373 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"ac40d3791b2e007821e36117f84b7c407ad22b3b8c69ac7f3b7c59163ae8a580"} Jan 31 04:42:54 crc kubenswrapper[4931]: I0131 04:42:54.501449 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"67a38874c38f9e1787eea51d973656b7c5e6f7005772b0614093dfbf2940c51d"} Jan 31 04:42:55 crc kubenswrapper[4931]: I0131 04:42:55.512959 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"fe25ec20b8c7147b71e5842df03f87d5aca71cdcce253c6c28c8a852574ad705"} Jan 31 04:42:55 crc kubenswrapper[4931]: I0131 04:42:55.513000 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"88588e8a59503b12de60f655c956337956cbd83d6d7f584786213c67fc5dddf3"} Jan 31 04:42:55 crc kubenswrapper[4931]: I0131 04:42:55.513012 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"3f4a541322e187fde8d7aad2ed1a4ab0466c135843282a64a63aa03afa8afd06"} Jan 31 04:42:56 crc kubenswrapper[4931]: I0131 04:42:56.522957 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"731c6d05794c4c9d1a9838a26893956347a54dcd0f02519c8309eb1bbcd40cbb"} Jan 31 04:42:56 crc kubenswrapper[4931]: I0131 04:42:56.523513 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"04389e26fc92770ada16942c2878190dc4455d6faa0428127ed7b0d496833189"} Jan 31 04:42:56 crc kubenswrapper[4931]: I0131 04:42:56.523524 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"711b69a467f96d125ec4c013b61f2d28c7eece10d924574498372810780353fe"} Jan 31 04:42:57 crc kubenswrapper[4931]: I0131 04:42:57.534178 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" 
event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"16ac5e0411172912fdf8214fe170c0a338de8c87052b44f9f64fbc20c38c2dd3"} Jan 31 04:42:58 crc kubenswrapper[4931]: I0131 04:42:58.545548 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"27d7543a1bddae1942097211e5316f3a2d6379ad9c7c952a7104c3e31f8d8b5c"} Jan 31 04:42:58 crc kubenswrapper[4931]: I0131 04:42:58.545602 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"fde30ce94504fe5ac081c59badc8b85599727512c9cd6900aeb54109c2c8fc35"} Jan 31 04:42:59 crc kubenswrapper[4931]: I0131 04:42:59.560335 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"71460fec49955a9a0d00a8f9044a6e49c5cccd3a0f057766fbd57ecff09b6d62"} Jan 31 04:42:59 crc kubenswrapper[4931]: I0131 04:42:59.560671 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"7d939c4e21fe70e4e88cfe93eb9f9586887f8d0582df7f3d30091cdb940b0066"} Jan 31 04:42:59 crc kubenswrapper[4931]: I0131 04:42:59.560686 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"b5e45aef23ffa50b24c426590326012b0fbc9ee6b7b9e1c91be8f24e55a4f479"} Jan 31 04:42:59 crc kubenswrapper[4931]: I0131 04:42:59.560697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"e5870fbf9f6af9bf8b6612d3aad3451630e3e92e114950af29e7d0184c663c69"} Jan 31 04:42:59 crc kubenswrapper[4931]: I0131 04:42:59.560708 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0355d163-55e9-4ac4-8dd4-081e9a637aaf","Type":"ContainerStarted","Data":"9f21231a19d241fe80c527b08f82479466f770eb0ed3b966752dbd9ffe72e864"} Jan 31 04:42:59 crc kubenswrapper[4931]: I0131 04:42:59.595964 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-storage-0" podStartSLOduration=35.332345687 podStartE2EDuration="40.595713296s" podCreationTimestamp="2026-01-31 04:42:19 +0000 UTC" firstStartedPulling="2026-01-31 04:42:52.957692781 +0000 UTC m=+1131.766921655" lastFinishedPulling="2026-01-31 04:42:58.22106039 +0000 UTC m=+1137.030289264" observedRunningTime="2026-01-31 04:42:59.589480044 +0000 UTC m=+1138.398708928" watchObservedRunningTime="2026-01-31 04:42:59.595713296 +0000 UTC m=+1138.404942170" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.552561 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5"] Jan 31 04:43:00 crc kubenswrapper[4931]: E0131 04:43:00.552858 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97c9738-62e4-4623-8974-f8625930a8a5" containerName="pull" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.552870 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97c9738-62e4-4623-8974-f8625930a8a5" containerName="pull" Jan 31 04:43:00 crc 
kubenswrapper[4931]: E0131 04:43:00.552886 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97c9738-62e4-4623-8974-f8625930a8a5" containerName="extract" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.552891 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97c9738-62e4-4623-8974-f8625930a8a5" containerName="extract" Jan 31 04:43:00 crc kubenswrapper[4931]: E0131 04:43:00.552903 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" containerName="swift-ring-rebalance" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.552909 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" containerName="swift-ring-rebalance" Jan 31 04:43:00 crc kubenswrapper[4931]: E0131 04:43:00.552922 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97c9738-62e4-4623-8974-f8625930a8a5" containerName="util" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.552928 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97c9738-62e4-4623-8974-f8625930a8a5" containerName="util" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.553056 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97c9738-62e4-4623-8974-f8625930a8a5" containerName="extract" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.553072 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3" containerName="swift-ring-rebalance" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.553755 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.555378 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-service-cert" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.556956 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-k2zrg" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.569794 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5"] Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.600834 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4310cc9-8307-46fa-92f5-79c101f3535d-webhook-cert\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.600878 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4310cc9-8307-46fa-92f5-79c101f3535d-apiservice-cert\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.600984 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfs7l\" (UniqueName: 
\"kubernetes.io/projected/a4310cc9-8307-46fa-92f5-79c101f3535d-kube-api-access-sfs7l\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.702301 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfs7l\" (UniqueName: \"kubernetes.io/projected/a4310cc9-8307-46fa-92f5-79c101f3535d-kube-api-access-sfs7l\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.702356 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4310cc9-8307-46fa-92f5-79c101f3535d-webhook-cert\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.702378 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4310cc9-8307-46fa-92f5-79c101f3535d-apiservice-cert\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.707884 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4310cc9-8307-46fa-92f5-79c101f3535d-apiservice-cert\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.714485 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4310cc9-8307-46fa-92f5-79c101f3535d-webhook-cert\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.718810 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfs7l\" (UniqueName: \"kubernetes.io/projected/a4310cc9-8307-46fa-92f5-79c101f3535d-kube-api-access-sfs7l\") pod \"glance-operator-controller-manager-6b769874f6-2fll5\" (UID: \"a4310cc9-8307-46fa-92f5-79c101f3535d\") " pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:00 crc kubenswrapper[4931]: I0131 04:43:00.871989 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:01 crc kubenswrapper[4931]: I0131 04:43:01.283841 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5"] Jan 31 04:43:01 crc kubenswrapper[4931]: W0131 04:43:01.295911 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4310cc9_8307_46fa_92f5_79c101f3535d.slice/crio-43340ced9fbe29ddcbca33c4d736538f3edf0c3156e82661878117c6265e2ec1 WatchSource:0}: Error finding container 43340ced9fbe29ddcbca33c4d736538f3edf0c3156e82661878117c6265e2ec1: Status 404 returned error can't find the container with id 43340ced9fbe29ddcbca33c4d736538f3edf0c3156e82661878117c6265e2ec1 Jan 31 04:43:01 crc kubenswrapper[4931]: I0131 04:43:01.576447 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" event={"ID":"a4310cc9-8307-46fa-92f5-79c101f3535d","Type":"ContainerStarted","Data":"43340ced9fbe29ddcbca33c4d736538f3edf0c3156e82661878117c6265e2ec1"} Jan 31 04:43:02 crc kubenswrapper[4931]: I0131 04:43:02.584075 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" event={"ID":"a4310cc9-8307-46fa-92f5-79c101f3535d","Type":"ContainerStarted","Data":"cb791c907788c84e77a96aff69fb24d145d84e7067394cc540ebf471b507a383"} Jan 31 04:43:03 crc kubenswrapper[4931]: I0131 04:43:03.592773 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" event={"ID":"a4310cc9-8307-46fa-92f5-79c101f3535d","Type":"ContainerStarted","Data":"c7316d47dde6d58cb2fdb4e39d8652f8048804ff367d457a2d10c701cc7b33c7"} Jan 31 04:43:03 crc kubenswrapper[4931]: I0131 04:43:03.593220 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:03 crc kubenswrapper[4931]: I0131 04:43:03.616470 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" podStartSLOduration=1.9367127659999999 podStartE2EDuration="3.616445878s" podCreationTimestamp="2026-01-31 04:43:00 +0000 UTC" firstStartedPulling="2026-01-31 04:43:01.297823643 +0000 UTC m=+1140.107052517" lastFinishedPulling="2026-01-31 04:43:02.977556755 +0000 UTC m=+1141.786785629" observedRunningTime="2026-01-31 04:43:03.608706125 +0000 UTC m=+1142.417934999" watchObservedRunningTime="2026-01-31 04:43:03.616445878 +0000 UTC m=+1142.425674772" Jan 31 04:43:10 crc kubenswrapper[4931]: I0131 04:43:10.877315 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6b769874f6-2fll5" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.627511 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.631113 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.650209 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.650389 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-qp85r" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.650438 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.651930 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.662374 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.718421 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvp42\" (UniqueName: \"kubernetes.io/projected/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-kube-api-access-lvp42\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.718524 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.718547 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.718583 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-scripts\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.820535 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-scripts\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.820925 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvp42\" (UniqueName: \"kubernetes.io/projected/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-kube-api-access-lvp42\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.821135 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.821263 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.822673 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-scripts\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.826445 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.830466 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.840488 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvp42\" (UniqueName: \"kubernetes.io/projected/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-kube-api-access-lvp42\") pod \"openstackclient\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:14 crc kubenswrapper[4931]: I0131 04:43:14.973813 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.396443 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:43:15 crc kubenswrapper[4931]: W0131 04:43:15.402528 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3c38ee8_fa37_45d3_ac6b_b2e055f0a312.slice/crio-626ff6c403e1ea1f10692c5a82946fb3ada104a3422cd34c3c79dd993674aa33 WatchSource:0}: Error finding container 626ff6c403e1ea1f10692c5a82946fb3ada104a3422cd34c3c79dd993674aa33: Status 404 returned error can't find the container with id 626ff6c403e1ea1f10692c5a82946fb3ada104a3422cd34c3c79dd993674aa33 Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.453048 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-lmht8"] Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.453936 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lmht8" Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.460541 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-lmht8"] Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.532181 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ddk\" (UniqueName: \"kubernetes.io/projected/2d97ba54-c088-4920-b129-4e932187d239-kube-api-access-82ddk\") pod \"glance-db-create-lmht8\" (UID: \"2d97ba54-c088-4920-b129-4e932187d239\") " pod="glance-kuttl-tests/glance-db-create-lmht8" Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.633782 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ddk\" (UniqueName: \"kubernetes.io/projected/2d97ba54-c088-4920-b129-4e932187d239-kube-api-access-82ddk\") pod \"glance-db-create-lmht8\" (UID: \"2d97ba54-c088-4920-b129-4e932187d239\") " pod="glance-kuttl-tests/glance-db-create-lmht8" Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.650537 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ddk\" (UniqueName: \"kubernetes.io/projected/2d97ba54-c088-4920-b129-4e932187d239-kube-api-access-82ddk\") pod \"glance-db-create-lmht8\" (UID: \"2d97ba54-c088-4920-b129-4e932187d239\") " pod="glance-kuttl-tests/glance-db-create-lmht8" Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.770530 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lmht8" Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.789065 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312","Type":"ContainerStarted","Data":"626ff6c403e1ea1f10692c5a82946fb3ada104a3422cd34c3c79dd993674aa33"} Jan 31 04:43:15 crc kubenswrapper[4931]: I0131 04:43:15.987936 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-lmht8"] Jan 31 04:43:16 crc kubenswrapper[4931]: I0131 04:43:16.816406 4931 generic.go:334] "Generic (PLEG): container finished" podID="2d97ba54-c088-4920-b129-4e932187d239" containerID="365ca5691b44da07dd10e84670168714df6f5f8e3d528f6e700fd346892466f5" exitCode=0 Jan 31 04:43:16 crc kubenswrapper[4931]: I0131 04:43:16.816456 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-lmht8" event={"ID":"2d97ba54-c088-4920-b129-4e932187d239","Type":"ContainerDied","Data":"365ca5691b44da07dd10e84670168714df6f5f8e3d528f6e700fd346892466f5"} Jan 31 04:43:16 crc kubenswrapper[4931]: I0131 04:43:16.816511 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-lmht8" event={"ID":"2d97ba54-c088-4920-b129-4e932187d239","Type":"ContainerStarted","Data":"caef54a2b81a5372a30b5a7da54f4b276f5b8b69090c32edc70a9e4de9fa0625"} Jan 31 04:43:20 crc kubenswrapper[4931]: I0131 04:43:20.076618 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lmht8" Jan 31 04:43:20 crc kubenswrapper[4931]: I0131 04:43:20.201941 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82ddk\" (UniqueName: \"kubernetes.io/projected/2d97ba54-c088-4920-b129-4e932187d239-kube-api-access-82ddk\") pod \"2d97ba54-c088-4920-b129-4e932187d239\" (UID: \"2d97ba54-c088-4920-b129-4e932187d239\") " Jan 31 04:43:20 crc kubenswrapper[4931]: I0131 04:43:20.207961 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d97ba54-c088-4920-b129-4e932187d239-kube-api-access-82ddk" (OuterVolumeSpecName: "kube-api-access-82ddk") pod "2d97ba54-c088-4920-b129-4e932187d239" (UID: "2d97ba54-c088-4920-b129-4e932187d239"). InnerVolumeSpecName "kube-api-access-82ddk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:20 crc kubenswrapper[4931]: I0131 04:43:20.303644 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82ddk\" (UniqueName: \"kubernetes.io/projected/2d97ba54-c088-4920-b129-4e932187d239-kube-api-access-82ddk\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:20 crc kubenswrapper[4931]: I0131 04:43:20.848593 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-lmht8" event={"ID":"2d97ba54-c088-4920-b129-4e932187d239","Type":"ContainerDied","Data":"caef54a2b81a5372a30b5a7da54f4b276f5b8b69090c32edc70a9e4de9fa0625"} Jan 31 04:43:20 crc kubenswrapper[4931]: I0131 04:43:20.848836 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caef54a2b81a5372a30b5a7da54f4b276f5b8b69090c32edc70a9e4de9fa0625" Jan 31 04:43:20 crc kubenswrapper[4931]: I0131 04:43:20.848742 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lmht8" Jan 31 04:43:21 crc kubenswrapper[4931]: I0131 04:43:21.133492 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:43:21 crc kubenswrapper[4931]: I0131 04:43:21.133593 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:43:24 crc kubenswrapper[4931]: I0131 04:43:24.882356 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312","Type":"ContainerStarted","Data":"3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b"} Jan 31 04:43:24 crc kubenswrapper[4931]: I0131 04:43:24.905307 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=1.973478917 podStartE2EDuration="10.905290062s" podCreationTimestamp="2026-01-31 04:43:14 +0000 UTC" firstStartedPulling="2026-01-31 04:43:15.404798078 +0000 UTC m=+1154.214026952" lastFinishedPulling="2026-01-31 04:43:24.336609223 +0000 UTC m=+1163.145838097" observedRunningTime="2026-01-31 04:43:24.902535166 +0000 UTC m=+1163.711764040" watchObservedRunningTime="2026-01-31 04:43:24.905290062 +0000 UTC m=+1163.714518936" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.686209 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-4b63-account-create-kffdx"] Jan 31 04:43:25 crc kubenswrapper[4931]: E0131 04:43:25.686543 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d97ba54-c088-4920-b129-4e932187d239" containerName="mariadb-database-create" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.686559 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d97ba54-c088-4920-b129-4e932187d239" containerName="mariadb-database-create" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.686751 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d97ba54-c088-4920-b129-4e932187d239" containerName="mariadb-database-create" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.687289 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.689632 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.705433 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-4b63-account-create-kffdx"] Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.794494 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8gk9\" (UniqueName: \"kubernetes.io/projected/7b2268cb-de42-4a78-84d1-9ae16eee5535-kube-api-access-b8gk9\") pod \"glance-4b63-account-create-kffdx\" (UID: \"7b2268cb-de42-4a78-84d1-9ae16eee5535\") " pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.896193 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8gk9\" (UniqueName: \"kubernetes.io/projected/7b2268cb-de42-4a78-84d1-9ae16eee5535-kube-api-access-b8gk9\") pod \"glance-4b63-account-create-kffdx\" (UID: \"7b2268cb-de42-4a78-84d1-9ae16eee5535\") " pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" Jan 31 04:43:25 crc kubenswrapper[4931]: I0131 04:43:25.916255 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8gk9\" (UniqueName: \"kubernetes.io/projected/7b2268cb-de42-4a78-84d1-9ae16eee5535-kube-api-access-b8gk9\") pod \"glance-4b63-account-create-kffdx\" (UID: \"7b2268cb-de42-4a78-84d1-9ae16eee5535\") " pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" Jan 31 04:43:26 crc kubenswrapper[4931]: I0131 04:43:26.011896 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" Jan 31 04:43:26 crc kubenswrapper[4931]: I0131 04:43:26.423779 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-4b63-account-create-kffdx"] Jan 31 04:43:26 crc kubenswrapper[4931]: I0131 04:43:26.898862 4931 generic.go:334] "Generic (PLEG): container finished" podID="7b2268cb-de42-4a78-84d1-9ae16eee5535" containerID="a92e1254a118dba3fe07ecdbb36184ae4544e69d1535bcd7c7f4ed6b4c4329a1" exitCode=0 Jan 31 04:43:26 crc kubenswrapper[4931]: I0131 04:43:26.898918 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" event={"ID":"7b2268cb-de42-4a78-84d1-9ae16eee5535","Type":"ContainerDied","Data":"a92e1254a118dba3fe07ecdbb36184ae4544e69d1535bcd7c7f4ed6b4c4329a1"} Jan 31 04:43:26 crc kubenswrapper[4931]: I0131 04:43:26.898951 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" event={"ID":"7b2268cb-de42-4a78-84d1-9ae16eee5535","Type":"ContainerStarted","Data":"ab32afa433f4ac1678413d27af8a09af845f00efcfbf7f49ffc5e767307f4d83"} Jan 31 04:43:28 crc kubenswrapper[4931]: I0131 04:43:28.181194 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" Jan 31 04:43:28 crc kubenswrapper[4931]: I0131 04:43:28.242578 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8gk9\" (UniqueName: \"kubernetes.io/projected/7b2268cb-de42-4a78-84d1-9ae16eee5535-kube-api-access-b8gk9\") pod \"7b2268cb-de42-4a78-84d1-9ae16eee5535\" (UID: \"7b2268cb-de42-4a78-84d1-9ae16eee5535\") " Jan 31 04:43:28 crc kubenswrapper[4931]: I0131 04:43:28.247651 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2268cb-de42-4a78-84d1-9ae16eee5535-kube-api-access-b8gk9" (OuterVolumeSpecName: "kube-api-access-b8gk9") pod "7b2268cb-de42-4a78-84d1-9ae16eee5535" (UID: "7b2268cb-de42-4a78-84d1-9ae16eee5535"). InnerVolumeSpecName "kube-api-access-b8gk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:28 crc kubenswrapper[4931]: I0131 04:43:28.344485 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8gk9\" (UniqueName: \"kubernetes.io/projected/7b2268cb-de42-4a78-84d1-9ae16eee5535-kube-api-access-b8gk9\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:28 crc kubenswrapper[4931]: I0131 04:43:28.914356 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" event={"ID":"7b2268cb-de42-4a78-84d1-9ae16eee5535","Type":"ContainerDied","Data":"ab32afa433f4ac1678413d27af8a09af845f00efcfbf7f49ffc5e767307f4d83"} Jan 31 04:43:28 crc kubenswrapper[4931]: I0131 04:43:28.914395 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab32afa433f4ac1678413d27af8a09af845f00efcfbf7f49ffc5e767307f4d83" Jan 31 04:43:28 crc kubenswrapper[4931]: I0131 04:43:28.914416 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-4b63-account-create-kffdx" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.755154 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-8tdkn"] Jan 31 04:43:30 crc kubenswrapper[4931]: E0131 04:43:30.755838 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2268cb-de42-4a78-84d1-9ae16eee5535" containerName="mariadb-account-create" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.755853 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2268cb-de42-4a78-84d1-9ae16eee5535" containerName="mariadb-account-create" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.756010 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2268cb-de42-4a78-84d1-9ae16eee5535" containerName="mariadb-account-create" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.756589 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.759752 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.759789 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-rrmq9" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.790129 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-8tdkn"] Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.884246 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-db-sync-config-data\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.884312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-config-data\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.884347 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8cfr\" (UniqueName: \"kubernetes.io/projected/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-kube-api-access-b8cfr\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.986022 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-db-sync-config-data\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.986096 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-config-data\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.986139 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8cfr\" (UniqueName: \"kubernetes.io/projected/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-kube-api-access-b8cfr\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:30 crc kubenswrapper[4931]: I0131 04:43:30.999386 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-db-sync-config-data\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:31 crc kubenswrapper[4931]: I0131 04:43:31.001428 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-config-data\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:31 crc kubenswrapper[4931]: I0131 04:43:31.016866 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8cfr\" (UniqueName: \"kubernetes.io/projected/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-kube-api-access-b8cfr\") pod \"glance-db-sync-8tdkn\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:31 crc kubenswrapper[4931]: I0131 04:43:31.093434 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:31 crc kubenswrapper[4931]: I0131 04:43:31.510554 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-8tdkn"] Jan 31 04:43:31 crc kubenswrapper[4931]: I0131 04:43:31.936648 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-8tdkn" event={"ID":"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41","Type":"ContainerStarted","Data":"c5d5c7dfb1fbf54a33c0a9987818337d352a96f8ef8fc8cf71e3f0b06b3e5efe"} Jan 31 04:43:43 crc kubenswrapper[4931]: I0131 04:43:43.040345 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-8tdkn" event={"ID":"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41","Type":"ContainerStarted","Data":"efe2cf3b47432667022ab2a6937e20421c61b70597aaf688099c7ab50f2cbe70"} Jan 31 04:43:43 crc kubenswrapper[4931]: I0131 04:43:43.062195 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-8tdkn" podStartSLOduration=2.438810958 podStartE2EDuration="13.062171137s" podCreationTimestamp="2026-01-31 04:43:30 +0000 UTC" firstStartedPulling="2026-01-31 04:43:31.517749794 +0000 UTC m=+1170.326978668" lastFinishedPulling="2026-01-31 04:43:42.141109963 +0000 UTC m=+1180.950338847" observedRunningTime="2026-01-31 04:43:43.057032377 +0000 UTC m=+1181.866261251" watchObservedRunningTime="2026-01-31 04:43:43.062171137 +0000 UTC m=+1181.871400051" Jan 31 04:43:49 crc kubenswrapper[4931]: I0131 04:43:49.079739 4931 generic.go:334] "Generic (PLEG): container finished" podID="f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" containerID="efe2cf3b47432667022ab2a6937e20421c61b70597aaf688099c7ab50f2cbe70" exitCode=0 Jan 31 04:43:49 crc kubenswrapper[4931]: I0131 04:43:49.079746 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-8tdkn" event={"ID":"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41","Type":"ContainerDied","Data":"efe2cf3b47432667022ab2a6937e20421c61b70597aaf688099c7ab50f2cbe70"} Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.417853 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.598646 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-db-sync-config-data\") pod \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.598791 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-config-data\") pod \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.598828 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8cfr\" (UniqueName: \"kubernetes.io/projected/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-kube-api-access-b8cfr\") pod \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\" (UID: \"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41\") " Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.604646 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" (UID: "f3aa2f3a-f268-48f2-94b8-77b9c3f30b41"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.606050 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-kube-api-access-b8cfr" (OuterVolumeSpecName: "kube-api-access-b8cfr") pod "f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" (UID: "f3aa2f3a-f268-48f2-94b8-77b9c3f30b41"). InnerVolumeSpecName "kube-api-access-b8cfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.646069 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-config-data" (OuterVolumeSpecName: "config-data") pod "f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" (UID: "f3aa2f3a-f268-48f2-94b8-77b9c3f30b41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.700745 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.700778 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:50 crc kubenswrapper[4931]: I0131 04:43:50.700787 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8cfr\" (UniqueName: \"kubernetes.io/projected/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41-kube-api-access-b8cfr\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.096478 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-8tdkn" event={"ID":"f3aa2f3a-f268-48f2-94b8-77b9c3f30b41","Type":"ContainerDied","Data":"c5d5c7dfb1fbf54a33c0a9987818337d352a96f8ef8fc8cf71e3f0b06b3e5efe"} Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.096515 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d5c7dfb1fbf54a33c0a9987818337d352a96f8ef8fc8cf71e3f0b06b3e5efe" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.096572 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-8tdkn" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.133026 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.133320 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.133370 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.134151 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6fdd63c2992141cc392f9894235afa4f2697a4d0af5fdfafac1d9c21aba8ff3"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.134238 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://f6fdd63c2992141cc392f9894235afa4f2697a4d0af5fdfafac1d9c21aba8ff3" gracePeriod=600 Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.606916 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:43:51 
crc kubenswrapper[4931]: E0131 04:43:51.607594 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" containerName="glance-db-sync" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.607611 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" containerName="glance-db-sync" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.607779 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" containerName="glance-db-sync" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.608686 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.610377 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.610529 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-rrmq9" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.610909 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.621799 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.623269 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.627489 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.638880 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713253 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713304 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-sys\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713325 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713352 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-lib-modules\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713406 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-nvme\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713444 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5cc9\" (UniqueName: \"kubernetes.io/projected/a58bbbba-8c2f-459d-84b9-82e2054e6a62-kube-api-access-z5cc9\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713487 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-logs\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713513 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-run\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713532 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-scripts\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-dev\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713614 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713639 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-config-data\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713655 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-httpd-run\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.713676 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.814945 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5cc9\" (UniqueName: \"kubernetes.io/projected/a58bbbba-8c2f-459d-84b9-82e2054e6a62-kube-api-access-z5cc9\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815006 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815051 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-logs\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815075 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-logs\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815104 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815461 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-scripts\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815515 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-run\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815538 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-lib-modules\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" 
Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815568 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-scripts\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815585 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-run\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815692 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-dev\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815709 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-dev\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815740 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815762 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815793 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-config-data\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815809 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7vq\" (UniqueName: \"kubernetes.io/projected/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-kube-api-access-jk7vq\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815810 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-logs\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815829 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-httpd-run\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815876 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815945 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815966 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.815996 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-sys\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816012 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-httpd-run\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816041 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816063 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-config-data\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816106 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-sys\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816139 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-lib-modules\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816175 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-httpd-run\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816186 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-nvme\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816243 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816704 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-sys\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816787 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-lib-modules\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816813 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-run\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816827 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-dev\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816880 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.816922 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-nvme\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 
crc kubenswrapper[4931]: I0131 04:43:51.816956 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.817002 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.817049 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.823316 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-scripts\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.833354 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-config-data\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.843023 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.861384 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5cc9\" (UniqueName: \"kubernetes.io/projected/a58bbbba-8c2f-459d-84b9-82e2054e6a62-kube-api-access-z5cc9\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.902450 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-logs\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917622 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917650 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-scripts\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917681 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-lib-modules\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917707 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-run\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917779 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-dev\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917802 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917835 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7vq\" (UniqueName: \"kubernetes.io/projected/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-kube-api-access-jk7vq\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917887 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917914 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-httpd-run\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917939 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-config-data\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " 
pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917965 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-sys\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.917997 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.918023 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.918132 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.918580 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-logs\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.918633 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.919151 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-run\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.919198 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.919162 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-dev\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.919224 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-lib-modules\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.919350 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.921363 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-sys\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.921504 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.921815 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-httpd-run\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.925244 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-config-data\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.925586 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.934358 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-scripts\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.953092 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7vq\" (UniqueName: \"kubernetes.io/projected/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-kube-api-access-jk7vq\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.957230 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:51 crc kubenswrapper[4931]: I0131 04:43:51.983785 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.142885 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="f6fdd63c2992141cc392f9894235afa4f2697a4d0af5fdfafac1d9c21aba8ff3" exitCode=0 Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.142981 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"f6fdd63c2992141cc392f9894235afa4f2697a4d0af5fdfafac1d9c21aba8ff3"} Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.143232 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"9190250862ec2bef6098d6e5f251719973f7af0324c479608c8122c95778b27e"} Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.143254 4931 scope.go:117] "RemoveContainer" containerID="1ceca043a424e437ecc5528abaae063cab9d2263bd81f19cc01aa29b5f8717c7" Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.238741 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.455361 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.471212 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:52 crc kubenswrapper[4931]: W0131 04:43:52.498658 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db2bca9_d9c6_407f_bcbe_b93bb9e9ace3.slice/crio-3d6f6c3b2b6380bede6279e37d5aa7399bc34cc0aaa4e13a30ac23c563e99341 WatchSource:0}: Error finding container 3d6f6c3b2b6380bede6279e37d5aa7399bc34cc0aaa4e13a30ac23c563e99341: Status 404 returned error can't find the container with id 3d6f6c3b2b6380bede6279e37d5aa7399bc34cc0aaa4e13a30ac23c563e99341 Jan 31 04:43:52 crc kubenswrapper[4931]: I0131 04:43:52.665684 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.152978 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3","Type":"ContainerStarted","Data":"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608"} Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.153672 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3","Type":"ContainerStarted","Data":"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7"} Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.153693 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3","Type":"ContainerStarted","Data":"3d6f6c3b2b6380bede6279e37d5aa7399bc34cc0aaa4e13a30ac23c563e99341"} Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.153137 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-httpd" containerID="cri-o://58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608" gracePeriod=30 Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.153070 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-log" containerID="cri-o://88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7" gracePeriod=30 Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.155296 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a58bbbba-8c2f-459d-84b9-82e2054e6a62","Type":"ContainerStarted","Data":"531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12"} Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.155329 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a58bbbba-8c2f-459d-84b9-82e2054e6a62","Type":"ContainerStarted","Data":"975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901"} Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.155339 4931 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a58bbbba-8c2f-459d-84b9-82e2054e6a62","Type":"ContainerStarted","Data":"d43e6efec329c30296a05f4ab2fc4eb69762afe92dc97d6452bc04394415550f"} Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.177641 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=2.177623135 podStartE2EDuration="2.177623135s" podCreationTimestamp="2026-01-31 04:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:43:53.17449851 +0000 UTC m=+1191.983727384" watchObservedRunningTime="2026-01-31 04:43:53.177623135 +0000 UTC m=+1191.986852009" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.201007 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.200990603 podStartE2EDuration="2.200990603s" podCreationTimestamp="2026-01-31 04:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:43:53.197986561 +0000 UTC m=+1192.007215435" watchObservedRunningTime="2026-01-31 04:43:53.200990603 +0000 UTC m=+1192.010219477" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.554899 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.663863 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-httpd-run\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.663912 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-nvme\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.663964 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.663979 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-lib-modules\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664001 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7vq\" (UniqueName: \"kubernetes.io/projected/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-kube-api-access-jk7vq\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664033 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-logs\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" 
(UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664085 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-sys\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664101 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-dev\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664123 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-config-data\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664141 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-var-locks-brick\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664157 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-scripts\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664174 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-run\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664221 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-iscsi\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.664237 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\" (UID: \"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3\") " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665006 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665047 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665060 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-dev" (OuterVolumeSpecName: "dev") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665299 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-run" (OuterVolumeSpecName: "run") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665343 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665369 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-sys" (OuterVolumeSpecName: "sys") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665388 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665407 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.665445 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-logs" (OuterVolumeSpecName: "logs") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.669737 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance-cache") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.677914 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.678045 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-kube-api-access-jk7vq" (OuterVolumeSpecName: "kube-api-access-jk7vq") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "kube-api-access-jk7vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.678486 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-scripts" (OuterVolumeSpecName: "scripts") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.711035 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-config-data" (OuterVolumeSpecName: "config-data") pod "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" (UID: "3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765881 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765916 4931 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765926 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk7vq\" (UniqueName: \"kubernetes.io/projected/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-kube-api-access-jk7vq\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765938 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765946 4931 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-sys\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765954 4931 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-dev\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765963 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-config-data\") on node 
\"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765971 4931 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765980 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765989 4931 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.765997 4931 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.766016 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.766025 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.766034 4931 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.779767 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.780763 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.867867 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:53 crc kubenswrapper[4931]: I0131 04:43:53.867897 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.170116 4931 generic.go:334] "Generic (PLEG): container finished" podID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerID="58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608" exitCode=143 Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.170475 4931 generic.go:334] "Generic (PLEG): container finished" podID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerID="88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7" exitCode=143 Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.170281 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" 
event={"ID":"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3","Type":"ContainerDied","Data":"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608"} Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.170522 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3","Type":"ContainerDied","Data":"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7"} Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.170535 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3","Type":"ContainerDied","Data":"3d6f6c3b2b6380bede6279e37d5aa7399bc34cc0aaa4e13a30ac23c563e99341"} Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.170555 4931 scope.go:117] "RemoveContainer" containerID="58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.170373 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.203804 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.210596 4931 scope.go:117] "RemoveContainer" containerID="88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.215066 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.230282 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:54 crc kubenswrapper[4931]: E0131 04:43:54.230621 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-httpd" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.230644 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-httpd" Jan 31 04:43:54 crc kubenswrapper[4931]: E0131 04:43:54.230680 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-log" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.230689 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-log" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.230872 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-log" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.230888 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" containerName="glance-httpd" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.231916 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.241165 4931 scope.go:117] "RemoveContainer" containerID="58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608" Jan 31 04:43:54 crc kubenswrapper[4931]: E0131 04:43:54.245084 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608\": container with ID starting with 58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608 not found: ID does not exist" containerID="58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.245137 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608"} err="failed to get container status \"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608\": rpc error: code = NotFound desc = could not find container \"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608\": container with ID starting with 58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608 not found: ID does not exist" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.245162 4931 scope.go:117] "RemoveContainer" containerID="88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.248402 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:54 crc kubenswrapper[4931]: E0131 04:43:54.248518 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7\": container with ID starting with 88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7 not found: ID does not exist" containerID="88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.248574 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7"} err="failed to get container status \"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7\": rpc error: code = NotFound desc = could not find container \"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7\": container with ID starting with 88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7 not found: ID does not exist" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.248616 4931 scope.go:117] "RemoveContainer" containerID="58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.249207 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608"} err="failed to get container status \"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608\": rpc error: code = NotFound desc = could not find container \"58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608\": container with ID starting with 58a381209116985bc1ff97755c49856966cb7e63301d6ba22ff3e5253857b608 not found: ID does not exist" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.249238 
4931 scope.go:117] "RemoveContainer" containerID="88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.249475 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7"} err="failed to get container status \"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7\": rpc error: code = NotFound desc = could not find container \"88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7\": container with ID starting with 88bf02a5b0951b3b9d3e29053b3331ef725f5bda52f56d72702517f2415bcac7 not found: ID does not exist" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376219 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376273 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-logs\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376310 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376338 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l427\" (UniqueName: \"kubernetes.io/projected/6e210c4e-37a9-40f4-9091-9b91266ba14e-kube-api-access-4l427\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376368 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376390 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-run\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376443 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376473 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-scripts\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376520 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-lib-modules\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376548 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-config-data\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376571 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-dev\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376609 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-httpd-run\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376634 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-nvme\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.376679 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-sys\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478020 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-httpd-run\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478087 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-nvme\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478145 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-sys\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478168 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478186 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-logs\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478210 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478227 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l427\" (UniqueName: \"kubernetes.io/projected/6e210c4e-37a9-40f4-9091-9b91266ba14e-kube-api-access-4l427\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478243 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478258 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-run\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478300 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478316 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-scripts\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478348 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-lib-modules\") pod 
\"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478370 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-config-data\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-dev\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478463 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-dev\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478860 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478952 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478868 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478885 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-lib-modules\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478900 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-run\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.479069 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-logs\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478960 4931 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478976 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-httpd-run\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478923 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-sys\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.478957 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-nvme\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.500851 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-config-data\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.502741 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-scripts\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.511884 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l427\" (UniqueName: \"kubernetes.io/projected/6e210c4e-37a9-40f4-9091-9b91266ba14e-kube-api-access-4l427\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.513942 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.516322 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:54 crc kubenswrapper[4931]: I0131 04:43:54.564186 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:43:55 crc kubenswrapper[4931]: I0131 04:43:55.033759 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:43:55 crc kubenswrapper[4931]: W0131 04:43:55.038847 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e210c4e_37a9_40f4_9091_9b91266ba14e.slice/crio-1bcf65f7d1e7c8f779f5997462271f64d9da1878e7c35b85fe8e92aae35fc08d WatchSource:0}: Error finding container 1bcf65f7d1e7c8f779f5997462271f64d9da1878e7c35b85fe8e92aae35fc08d: Status 404 returned error can't find the container with id 1bcf65f7d1e7c8f779f5997462271f64d9da1878e7c35b85fe8e92aae35fc08d Jan 31 04:43:55 crc kubenswrapper[4931]: I0131 04:43:55.178052 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"6e210c4e-37a9-40f4-9091-9b91266ba14e","Type":"ContainerStarted","Data":"1bcf65f7d1e7c8f779f5997462271f64d9da1878e7c35b85fe8e92aae35fc08d"} Jan 31 04:43:55 crc kubenswrapper[4931]: I0131 04:43:55.907119 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3" path="/var/lib/kubelet/pods/3db2bca9-d9c6-407f-bcbe-b93bb9e9ace3/volumes" Jan 31 04:43:59 crc kubenswrapper[4931]: I0131 04:43:59.211697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"6e210c4e-37a9-40f4-9091-9b91266ba14e","Type":"ContainerStarted","Data":"cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f"} Jan 31 04:43:59 crc kubenswrapper[4931]: I0131 04:43:59.212440 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"6e210c4e-37a9-40f4-9091-9b91266ba14e","Type":"ContainerStarted","Data":"863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa"} Jan 31 04:43:59 crc kubenswrapper[4931]: I0131 04:43:59.240006 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=5.23997297 podStartE2EDuration="5.23997297s" podCreationTimestamp="2026-01-31 04:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:43:59.230963914 +0000 UTC m=+1198.040192818" watchObservedRunningTime="2026-01-31 04:43:59.23997297 +0000 UTC m=+1198.049201874" Jan 31 04:44:01 crc kubenswrapper[4931]: I0131 04:44:01.932909 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:01 crc kubenswrapper[4931]: I0131 04:44:01.933311 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:01 crc kubenswrapper[4931]: I0131 04:44:01.967051 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:01 crc kubenswrapper[4931]: I0131 04:44:01.971710 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:02 crc kubenswrapper[4931]: I0131 04:44:02.234779 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:02 crc kubenswrapper[4931]: I0131 04:44:02.235035 
4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:04 crc kubenswrapper[4931]: I0131 04:44:04.235492 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:04 crc kubenswrapper[4931]: I0131 04:44:04.241365 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:04 crc kubenswrapper[4931]: I0131 04:44:04.564885 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:04 crc kubenswrapper[4931]: I0131 04:44:04.564931 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:04 crc kubenswrapper[4931]: I0131 04:44:04.593041 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:04 crc kubenswrapper[4931]: I0131 04:44:04.615264 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:05 crc kubenswrapper[4931]: I0131 04:44:05.259380 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:05 crc kubenswrapper[4931]: I0131 04:44:05.259436 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:07 crc kubenswrapper[4931]: I0131 04:44:07.179515 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:07 crc kubenswrapper[4931]: I0131 04:44:07.183894 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:07 crc kubenswrapper[4931]: I0131 04:44:07.253845 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:07 crc kubenswrapper[4931]: I0131 04:44:07.254125 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-log" containerID="cri-o://975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901" gracePeriod=30 Jan 31 04:44:07 crc kubenswrapper[4931]: I0131 04:44:07.254295 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-httpd" containerID="cri-o://531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12" gracePeriod=30 Jan 31 04:44:08 crc kubenswrapper[4931]: I0131 04:44:08.298614 4931 generic.go:334] "Generic (PLEG): container finished" podID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerID="975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901" exitCode=143 Jan 31 04:44:08 crc kubenswrapper[4931]: I0131 04:44:08.298833 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a58bbbba-8c2f-459d-84b9-82e2054e6a62","Type":"ContainerDied","Data":"975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901"} Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.018891 4931 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.147482 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-dev\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.147893 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-logs\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.147915 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-run\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.147942 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-config-data\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.147962 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.147984 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-lib-modules\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148032 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5cc9\" (UniqueName: \"kubernetes.io/projected/a58bbbba-8c2f-459d-84b9-82e2054e6a62-kube-api-access-z5cc9\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148069 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-sys\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148086 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-scripts\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148142 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-httpd-run\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148164 4931 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-var-locks-brick\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148244 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-iscsi\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148295 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-nvme\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.148317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\" (UID: \"a58bbbba-8c2f-459d-84b9-82e2054e6a62\") " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.149362 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-run" (OuterVolumeSpecName: "run") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.149427 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-dev" (OuterVolumeSpecName: "dev") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.149792 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.149834 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-sys" (OuterVolumeSpecName: "sys") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.149844 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-logs" (OuterVolumeSpecName: "logs") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.150091 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.150141 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.150170 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.150193 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.153981 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.154613 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.155835 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-scripts" (OuterVolumeSpecName: "scripts") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.157619 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58bbbba-8c2f-459d-84b9-82e2054e6a62-kube-api-access-z5cc9" (OuterVolumeSpecName: "kube-api-access-z5cc9") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "kube-api-access-z5cc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.190825 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-config-data" (OuterVolumeSpecName: "config-data") pod "a58bbbba-8c2f-459d-84b9-82e2054e6a62" (UID: "a58bbbba-8c2f-459d-84b9-82e2054e6a62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250189 4931 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250224 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5cc9\" (UniqueName: \"kubernetes.io/projected/a58bbbba-8c2f-459d-84b9-82e2054e6a62-kube-api-access-z5cc9\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250236 4931 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-sys\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250245 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250253 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250261 4931 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250269 4931 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250276 4931 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250304 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250314 4931 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-dev\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250324 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58bbbba-8c2f-459d-84b9-82e2054e6a62-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250331 4931 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a58bbbba-8c2f-459d-84b9-82e2054e6a62-run\") on node \"crc\" DevicePath 
\"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250339 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bbbba-8c2f-459d-84b9-82e2054e6a62-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.250353 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.262534 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.262563 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.321382 4931 generic.go:334] "Generic (PLEG): container finished" podID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerID="531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12" exitCode=0 Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.321421 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a58bbbba-8c2f-459d-84b9-82e2054e6a62","Type":"ContainerDied","Data":"531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12"} Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.321440 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.321455 4931 scope.go:117] "RemoveContainer" containerID="531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.321444 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a58bbbba-8c2f-459d-84b9-82e2054e6a62","Type":"ContainerDied","Data":"d43e6efec329c30296a05f4ab2fc4eb69762afe92dc97d6452bc04394415550f"} Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.350179 4931 scope.go:117] "RemoveContainer" containerID="975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.351463 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.351478 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.356778 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.366073 4931 scope.go:117] "RemoveContainer" containerID="531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12" Jan 31 04:44:11 crc kubenswrapper[4931]: E0131 04:44:11.366809 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12\": container with ID starting with 
531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12 not found: ID does not exist" containerID="531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.366936 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12"} err="failed to get container status \"531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12\": rpc error: code = NotFound desc = could not find container \"531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12\": container with ID starting with 531f8592d52ddec7d3ba024ad4ea3a25ef3bc8d9e1fc0659726056d0ad1a0c12 not found: ID does not exist" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.367041 4931 scope.go:117] "RemoveContainer" containerID="975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901" Jan 31 04:44:11 crc kubenswrapper[4931]: E0131 04:44:11.367461 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901\": container with ID starting with 975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901 not found: ID does not exist" containerID="975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.367548 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901"} err="failed to get container status \"975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901\": rpc error: code = NotFound desc = could not find container \"975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901\": container with ID starting with 975c7fb498e636c4a63312c58411aad5389bf947a255274eed09f0081f410901 not found: ID does not exist" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.368802 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.388856 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:11 crc kubenswrapper[4931]: E0131 04:44:11.389133 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-httpd" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.389152 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-httpd" Jan 31 04:44:11 crc kubenswrapper[4931]: E0131 04:44:11.389169 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-log" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.389177 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-log" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.389306 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-httpd" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.389324 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" containerName="glance-log" Jan 31 04:44:11 crc kubenswrapper[4931]: 
I0131 04:44:11.389967 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.415633 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.554198 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.554834 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-config-data\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.555040 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-logs\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.555270 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h6c8\" (UniqueName: \"kubernetes.io/projected/5312afee-c147-4fb7-887d-7659df3518e9-kube-api-access-2h6c8\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.555500 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-run\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.555702 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.555953 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-nvme\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.556180 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-scripts\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.556395 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-sys\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.556616 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-lib-modules\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.556887 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.557125 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-dev\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.557313 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-httpd-run\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.557490 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.658885 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-scripts\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.658940 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-sys\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.658972 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-lib-modules\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659028 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659059 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-dev\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659095 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-httpd-run\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659122 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659141 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659161 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-config-data\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659185 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-logs\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659213 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h6c8\" (UniqueName: \"kubernetes.io/projected/5312afee-c147-4fb7-887d-7659df3518e9-kube-api-access-2h6c8\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659240 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659321 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-run\") pod \"glance-default-single-0\" (UID: 
\"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659343 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-nvme\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659457 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-nvme\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659824 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659909 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-sys\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.659949 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-lib-modules\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.660031 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.661282 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-logs\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.661364 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.663181 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-scripts\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.663610 4931 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.664195 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-dev\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.664796 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-run\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.665942 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-config-data\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.670326 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-httpd-run\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.690176 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.690523 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.693441 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h6c8\" (UniqueName: \"kubernetes.io/projected/5312afee-c147-4fb7-887d-7659df3518e9-kube-api-access-2h6c8\") pod \"glance-default-single-0\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.708236 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:11 crc kubenswrapper[4931]: I0131 04:44:11.909599 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58bbbba-8c2f-459d-84b9-82e2054e6a62" path="/var/lib/kubelet/pods/a58bbbba-8c2f-459d-84b9-82e2054e6a62/volumes" Jan 31 04:44:12 crc kubenswrapper[4931]: I0131 04:44:12.144536 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:12 crc kubenswrapper[4931]: W0131 04:44:12.146959 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5312afee_c147_4fb7_887d_7659df3518e9.slice/crio-b7ec9a9f85285b537447f9f0cf7bc8a21e13097e7ccf402a8cf30ca6bca965d6 WatchSource:0}: Error finding container b7ec9a9f85285b537447f9f0cf7bc8a21e13097e7ccf402a8cf30ca6bca965d6: Status 404 returned error can't find the container with id b7ec9a9f85285b537447f9f0cf7bc8a21e13097e7ccf402a8cf30ca6bca965d6 Jan 31 04:44:12 crc kubenswrapper[4931]: I0131 04:44:12.330312 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5312afee-c147-4fb7-887d-7659df3518e9","Type":"ContainerStarted","Data":"08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1"} Jan 31 04:44:12 crc kubenswrapper[4931]: I0131 04:44:12.330361 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5312afee-c147-4fb7-887d-7659df3518e9","Type":"ContainerStarted","Data":"b7ec9a9f85285b537447f9f0cf7bc8a21e13097e7ccf402a8cf30ca6bca965d6"} Jan 31 04:44:13 crc kubenswrapper[4931]: I0131 04:44:13.345161 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5312afee-c147-4fb7-887d-7659df3518e9","Type":"ContainerStarted","Data":"975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87"} Jan 31 04:44:13 crc kubenswrapper[4931]: I0131 04:44:13.392586 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.392559314 podStartE2EDuration="2.392559314s" podCreationTimestamp="2026-01-31 04:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:44:13.381086491 +0000 UTC m=+1212.190315365" watchObservedRunningTime="2026-01-31 04:44:13.392559314 +0000 UTC m=+1212.201788208" Jan 31 04:44:21 crc kubenswrapper[4931]: I0131 04:44:21.708769 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:21 crc kubenswrapper[4931]: I0131 04:44:21.710906 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:21 crc kubenswrapper[4931]: I0131 04:44:21.733440 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:21 crc kubenswrapper[4931]: I0131 04:44:21.754498 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:22 crc kubenswrapper[4931]: I0131 04:44:22.402756 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:22 crc kubenswrapper[4931]: I0131 
04:44:22.402817 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:24 crc kubenswrapper[4931]: I0131 04:44:24.398117 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:24 crc kubenswrapper[4931]: I0131 04:44:24.399521 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.273079 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-8tdkn"] Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.280644 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-8tdkn"] Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.351770 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance4b63-account-delete-4cxb4"] Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.352735 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.371975 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.372215 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-log" containerID="cri-o://08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1" gracePeriod=30 Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.372347 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-httpd" containerID="cri-o://975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87" gracePeriod=30 Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.393488 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.394042 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-log" containerID="cri-o://863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa" gracePeriod=30 Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.394508 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-httpd" containerID="cri-o://cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f" gracePeriod=30 Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.409303 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance4b63-account-delete-4cxb4"] Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.469973 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.470168 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstackclient" podUID="c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" 
containerName="openstackclient" containerID="cri-o://3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b" gracePeriod=30 Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.526280 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2957\" (UniqueName: \"kubernetes.io/projected/7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261-kube-api-access-j2957\") pod \"glance4b63-account-delete-4cxb4\" (UID: \"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261\") " pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.559159 4931 generic.go:334] "Generic (PLEG): container finished" podID="5312afee-c147-4fb7-887d-7659df3518e9" containerID="08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1" exitCode=143 Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.559227 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5312afee-c147-4fb7-887d-7659df3518e9","Type":"ContainerDied","Data":"08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1"} Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.560768 4931 generic.go:334] "Generic (PLEG): container finished" podID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerID="863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa" exitCode=143 Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.560796 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"6e210c4e-37a9-40f4-9091-9b91266ba14e","Type":"ContainerDied","Data":"863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa"} Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.629359 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2957\" (UniqueName: \"kubernetes.io/projected/7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261-kube-api-access-j2957\") pod \"glance4b63-account-delete-4cxb4\" (UID: \"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261\") " pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.660292 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2957\" (UniqueName: \"kubernetes.io/projected/7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261-kube-api-access-j2957\") pod \"glance4b63-account-delete-4cxb4\" (UID: \"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261\") " pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.670173 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" Jan 31 04:44:40 crc kubenswrapper[4931]: I0131 04:44:40.851798 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.036970 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-scripts\") pod \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.037066 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config\") pod \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.037143 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config-secret\") pod \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.037187 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvp42\" (UniqueName: \"kubernetes.io/projected/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-kube-api-access-lvp42\") pod \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\" (UID: \"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312\") " Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.039369 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-scripts" (OuterVolumeSpecName: "openstack-scripts") pod "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" (UID: "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312"). InnerVolumeSpecName "openstack-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.043377 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-kube-api-access-lvp42" (OuterVolumeSpecName: "kube-api-access-lvp42") pod "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" (UID: "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312"). InnerVolumeSpecName "kube-api-access-lvp42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.056506 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" (UID: "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.058066 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" (UID: "c3c38ee8-fa37-45d3-ac6b-b2e055f0a312"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.136059 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance4b63-account-delete-4cxb4"] Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.138955 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.138996 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.139009 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvp42\" (UniqueName: \"kubernetes.io/projected/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-kube-api-access-lvp42\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.139021 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312-openstack-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.567972 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" containerID="3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b" exitCode=143 Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.568038 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.568024 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312","Type":"ContainerDied","Data":"3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b"} Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.568492 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"c3c38ee8-fa37-45d3-ac6b-b2e055f0a312","Type":"ContainerDied","Data":"626ff6c403e1ea1f10692c5a82946fb3ada104a3422cd34c3c79dd993674aa33"} Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.568525 4931 scope.go:117] "RemoveContainer" containerID="3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.571870 4931 generic.go:334] "Generic (PLEG): container finished" podID="7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261" containerID="e4d9ee8593730318929f21d5d62d7f648f5045f95409da12f6ccbbe6cd4d200b" exitCode=0 Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.571913 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" event={"ID":"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261","Type":"ContainerDied","Data":"e4d9ee8593730318929f21d5d62d7f648f5045f95409da12f6ccbbe6cd4d200b"} Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.571944 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" event={"ID":"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261","Type":"ContainerStarted","Data":"8335ea564ed887953e69ae1c838b1f3d18b286cfc3815ae4ae42d438a679406e"} Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.601914 4931 
scope.go:117] "RemoveContainer" containerID="3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b" Jan 31 04:44:41 crc kubenswrapper[4931]: E0131 04:44:41.603152 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b\": container with ID starting with 3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b not found: ID does not exist" containerID="3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.603274 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b"} err="failed to get container status \"3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b\": rpc error: code = NotFound desc = could not find container \"3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b\": container with ID starting with 3a592b8d9aa8d48d230df62675e7f380b28d5192a082f39ceda919e964d45e8b not found: ID does not exist" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.610485 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.617337 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.908271 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" path="/var/lib/kubelet/pods/c3c38ee8-fa37-45d3-ac6b-b2e055f0a312/volumes" Jan 31 04:44:41 crc kubenswrapper[4931]: I0131 04:44:41.910416 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3aa2f3a-f268-48f2-94b8-77b9c3f30b41" path="/var/lib/kubelet/pods/f3aa2f3a-f268-48f2-94b8-77b9c3f30b41/volumes" Jan 31 04:44:42 crc kubenswrapper[4931]: I0131 04:44:42.860297 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" Jan 31 04:44:42 crc kubenswrapper[4931]: I0131 04:44:42.966165 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2957\" (UniqueName: \"kubernetes.io/projected/7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261-kube-api-access-j2957\") pod \"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261\" (UID: \"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261\") " Jan 31 04:44:42 crc kubenswrapper[4931]: I0131 04:44:42.974697 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261-kube-api-access-j2957" (OuterVolumeSpecName: "kube-api-access-j2957") pod "7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261" (UID: "7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261"). InnerVolumeSpecName "kube-api-access-j2957". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.067746 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2957\" (UniqueName: \"kubernetes.io/projected/7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261-kube-api-access-j2957\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.528403 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.102:9292/healthcheck\": read tcp 10.217.0.2:59672->10.217.0.102:9292: read: connection reset by peer" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.528446 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.102:9292/healthcheck\": read tcp 10.217.0.2:59660->10.217.0.102:9292: read: connection reset by peer" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.594319 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" event={"ID":"7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261","Type":"ContainerDied","Data":"8335ea564ed887953e69ae1c838b1f3d18b286cfc3815ae4ae42d438a679406e"} Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.594391 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8335ea564ed887953e69ae1c838b1f3d18b286cfc3815ae4ae42d438a679406e" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.594497 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance4b63-account-delete-4cxb4" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.905104 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.978163 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.991957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-scripts\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.991990 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992028 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-logs\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992073 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-var-locks-brick\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992096 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h6c8\" (UniqueName: \"kubernetes.io/projected/5312afee-c147-4fb7-887d-7659df3518e9-kube-api-access-2h6c8\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992116 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-run\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992164 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-sys\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992191 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-httpd-run\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992205 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-lib-modules\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992222 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-config-data\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 
04:44:43.992237 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-nvme\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992263 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-dev\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992290 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-iscsi\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992308 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"5312afee-c147-4fb7-887d-7659df3518e9\" (UID: \"5312afee-c147-4fb7-887d-7659df3518e9\") " Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.992788 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.993197 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.993265 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-sys" (OuterVolumeSpecName: "sys") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.993289 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.993308 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-run" (OuterVolumeSpecName: "run") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.993404 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-logs" (OuterVolumeSpecName: "logs") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.993440 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-dev" (OuterVolumeSpecName: "dev") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.993539 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.998306 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:44:43 crc kubenswrapper[4931]: I0131 04:44:43.998504 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.000362 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-scripts" (OuterVolumeSpecName: "scripts") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.012950 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.013409 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5312afee-c147-4fb7-887d-7659df3518e9-kube-api-access-2h6c8" (OuterVolumeSpecName: "kube-api-access-2h6c8") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "kube-api-access-2h6c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.040100 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-config-data" (OuterVolumeSpecName: "config-data") pod "5312afee-c147-4fb7-887d-7659df3518e9" (UID: "5312afee-c147-4fb7-887d-7659df3518e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093546 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093654 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-var-locks-brick\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093669 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-iscsi\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093685 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093714 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-logs\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093742 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-scripts\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093755 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-dev\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093778 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-config-data\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093798 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l427\" (UniqueName: \"kubernetes.io/projected/6e210c4e-37a9-40f4-9091-9b91266ba14e-kube-api-access-4l427\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 
crc kubenswrapper[4931]: I0131 04:44:44.093875 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-run\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093890 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-nvme\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093879 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093913 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-sys\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093932 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-httpd-run\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093960 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-lib-modules\") pod \"6e210c4e-37a9-40f4-9091-9b91266ba14e\" (UID: \"6e210c4e-37a9-40f4-9091-9b91266ba14e\") " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.093936 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.094125 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-sys" (OuterVolumeSpecName: "sys") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.094145 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-run" (OuterVolumeSpecName: "run") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.094160 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.094204 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-dev" (OuterVolumeSpecName: "dev") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.094513 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.094707 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-logs" (OuterVolumeSpecName: "logs") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.094882 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095503 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095529 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095541 4931 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095552 4931 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095564 4931 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095575 4931 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-sys\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095586 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095598 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h6c8\" (UniqueName: \"kubernetes.io/projected/5312afee-c147-4fb7-887d-7659df3518e9-kube-api-access-2h6c8\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095615 4931 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095624 4931 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095633 4931 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-sys\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095643 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5312afee-c147-4fb7-887d-7659df3518e9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095652 4931 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095663 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095672 4931 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095682 4931 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-dev\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095692 4931 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095701 4931 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095710 4931 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5312afee-c147-4fb7-887d-7659df3518e9-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095752 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095764 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e210c4e-37a9-40f4-9091-9b91266ba14e-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095774 4931 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e210c4e-37a9-40f4-9091-9b91266ba14e-dev\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.095784 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5312afee-c147-4fb7-887d-7659df3518e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.096210 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance-cache") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.096629 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.097081 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e210c4e-37a9-40f4-9091-9b91266ba14e-kube-api-access-4l427" (OuterVolumeSpecName: "kube-api-access-4l427") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "kube-api-access-4l427". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.106810 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-scripts" (OuterVolumeSpecName: "scripts") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.108441 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.109891 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.124106 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-config-data" (OuterVolumeSpecName: "config-data") pod "6e210c4e-37a9-40f4-9091-9b91266ba14e" (UID: "6e210c4e-37a9-40f4-9091-9b91266ba14e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.196580 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.196614 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.196626 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.196637 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.196646 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e210c4e-37a9-40f4-9091-9b91266ba14e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.196657 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l427\" (UniqueName: \"kubernetes.io/projected/6e210c4e-37a9-40f4-9091-9b91266ba14e-kube-api-access-4l427\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.196672 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.209342 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.210314 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.298231 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.298265 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.605530 4931 generic.go:334] "Generic (PLEG): container finished" podID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerID="cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f" exitCode=0 Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.605639 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.605665 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"6e210c4e-37a9-40f4-9091-9b91266ba14e","Type":"ContainerDied","Data":"cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f"} Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.606181 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"6e210c4e-37a9-40f4-9091-9b91266ba14e","Type":"ContainerDied","Data":"1bcf65f7d1e7c8f779f5997462271f64d9da1878e7c35b85fe8e92aae35fc08d"} Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.606213 4931 scope.go:117] "RemoveContainer" containerID="cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.609041 4931 generic.go:334] "Generic (PLEG): container finished" podID="5312afee-c147-4fb7-887d-7659df3518e9" containerID="975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87" exitCode=0 Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.609093 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5312afee-c147-4fb7-887d-7659df3518e9","Type":"ContainerDied","Data":"975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87"} Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.609129 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5312afee-c147-4fb7-887d-7659df3518e9","Type":"ContainerDied","Data":"b7ec9a9f85285b537447f9f0cf7bc8a21e13097e7ccf402a8cf30ca6bca965d6"} Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.609136 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.636865 4931 scope.go:117] "RemoveContainer" containerID="863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.658198 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.667067 4931 scope.go:117] "RemoveContainer" containerID="cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f" Jan 31 04:44:44 crc kubenswrapper[4931]: E0131 04:44:44.667478 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f\": container with ID starting with cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f not found: ID does not exist" containerID="cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.667510 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f"} err="failed to get container status \"cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f\": rpc error: code = NotFound desc = could not find container \"cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f\": container with ID starting with cdeb72064fc84b188e77fd7cb03704d986954d516bf7ecab6429f1cc98d3794f not found: ID does not exist" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.667532 4931 scope.go:117] "RemoveContainer" containerID="863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa" Jan 31 04:44:44 crc kubenswrapper[4931]: E0131 04:44:44.668022 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa\": container with ID starting with 863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa not found: ID does not exist" containerID="863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.668070 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa"} err="failed to get container status \"863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa\": rpc error: code = NotFound desc = could not find container \"863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa\": container with ID starting with 863e8a735fe38971b78a62f35ee3e9b1834822ee8a5f24131c11ec535f22caaa not found: ID does not exist" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.668104 4931 scope.go:117] "RemoveContainer" containerID="975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.670070 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.677559 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.684659 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.688430 4931 scope.go:117] "RemoveContainer" containerID="08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.720034 4931 scope.go:117] "RemoveContainer" containerID="975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87" Jan 31 04:44:44 crc kubenswrapper[4931]: E0131 04:44:44.720448 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87\": container with ID starting with 975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87 not found: ID does not exist" containerID="975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.720491 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87"} err="failed to get container status \"975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87\": rpc error: code = NotFound desc = could not find container \"975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87\": container with ID starting with 975cdd559d98366e43bf4deb010ccc5d654aa32d3127e9ffa06fbf820a6c1a87 not found: ID does not exist" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.720524 4931 scope.go:117] "RemoveContainer" containerID="08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1" Jan 31 04:44:44 crc kubenswrapper[4931]: E0131 04:44:44.720994 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1\": container with ID starting with 08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1 not found: ID does not exist" containerID="08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1" Jan 31 04:44:44 crc kubenswrapper[4931]: I0131 04:44:44.721016 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1"} err="failed to get container status \"08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1\": rpc error: code = NotFound desc = could not find container \"08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1\": container with ID starting with 08c8ae93301c8126f496abe8e821984bba7fc342fc21f1cbf717d9587b48e3b1 not found: ID does not exist" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.352674 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-lmht8"] Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.361588 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-lmht8"] Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.377642 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-4b63-account-create-kffdx"] Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.387184 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance4b63-account-delete-4cxb4"] Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.395539 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["glance-kuttl-tests/glance-4b63-account-create-kffdx"] Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.404365 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance4b63-account-delete-4cxb4"] Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.504455 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-46kjl"] Jan 31 04:44:45 crc kubenswrapper[4931]: E0131 04:44:45.504858 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" containerName="openstackclient" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.504881 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" containerName="openstackclient" Jan 31 04:44:45 crc kubenswrapper[4931]: E0131 04:44:45.504898 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-log" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.504906 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-log" Jan 31 04:44:45 crc kubenswrapper[4931]: E0131 04:44:45.504918 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261" containerName="mariadb-account-delete" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.504926 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261" containerName="mariadb-account-delete" Jan 31 04:44:45 crc kubenswrapper[4931]: E0131 04:44:45.504947 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-log" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.504954 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-log" Jan 31 04:44:45 crc kubenswrapper[4931]: E0131 04:44:45.504967 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-httpd" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.504974 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-httpd" Jan 31 04:44:45 crc kubenswrapper[4931]: E0131 04:44:45.504986 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-httpd" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.504993 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-httpd" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.505149 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-log" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.505166 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261" containerName="mariadb-account-delete" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.505180 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-log" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.505192 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3c38ee8-fa37-45d3-ac6b-b2e055f0a312" 
containerName="openstackclient" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.505202 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" containerName="glance-httpd" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.505213 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312afee-c147-4fb7-887d-7659df3518e9" containerName="glance-httpd" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.505801 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-46kjl" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.515107 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-46kjl"] Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.616516 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5cz\" (UniqueName: \"kubernetes.io/projected/dad5a09c-7878-4f94-ad3f-fc3ee9253186-kube-api-access-sg5cz\") pod \"glance-db-create-46kjl\" (UID: \"dad5a09c-7878-4f94-ad3f-fc3ee9253186\") " pod="glance-kuttl-tests/glance-db-create-46kjl" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.718397 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg5cz\" (UniqueName: \"kubernetes.io/projected/dad5a09c-7878-4f94-ad3f-fc3ee9253186-kube-api-access-sg5cz\") pod \"glance-db-create-46kjl\" (UID: \"dad5a09c-7878-4f94-ad3f-fc3ee9253186\") " pod="glance-kuttl-tests/glance-db-create-46kjl" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.738205 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg5cz\" (UniqueName: \"kubernetes.io/projected/dad5a09c-7878-4f94-ad3f-fc3ee9253186-kube-api-access-sg5cz\") pod \"glance-db-create-46kjl\" (UID: \"dad5a09c-7878-4f94-ad3f-fc3ee9253186\") " pod="glance-kuttl-tests/glance-db-create-46kjl" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.820322 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-46kjl" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.924949 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d97ba54-c088-4920-b129-4e932187d239" path="/var/lib/kubelet/pods/2d97ba54-c088-4920-b129-4e932187d239/volumes" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.926422 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5312afee-c147-4fb7-887d-7659df3518e9" path="/var/lib/kubelet/pods/5312afee-c147-4fb7-887d-7659df3518e9/volumes" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.927273 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e210c4e-37a9-40f4-9091-9b91266ba14e" path="/var/lib/kubelet/pods/6e210c4e-37a9-40f4-9091-9b91266ba14e/volumes" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.928650 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2268cb-de42-4a78-84d1-9ae16eee5535" path="/var/lib/kubelet/pods/7b2268cb-de42-4a78-84d1-9ae16eee5535/volumes" Jan 31 04:44:45 crc kubenswrapper[4931]: I0131 04:44:45.929286 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261" path="/var/lib/kubelet/pods/7e9d7d45-cee5-4f4a-a63c-6eae5b6e1261/volumes" Jan 31 04:44:46 crc kubenswrapper[4931]: I0131 04:44:46.278919 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-46kjl"] Jan 31 04:44:46 crc kubenswrapper[4931]: I0131 04:44:46.624896 4931 generic.go:334] "Generic (PLEG): container finished" podID="dad5a09c-7878-4f94-ad3f-fc3ee9253186" containerID="be01478f3b9f61c49a7dae1bf4f533f855667a7dc402a6747f221ccebb23301b" exitCode=0 Jan 31 04:44:46 crc kubenswrapper[4931]: I0131 04:44:46.624957 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-46kjl" event={"ID":"dad5a09c-7878-4f94-ad3f-fc3ee9253186","Type":"ContainerDied","Data":"be01478f3b9f61c49a7dae1bf4f533f855667a7dc402a6747f221ccebb23301b"} Jan 31 04:44:46 crc kubenswrapper[4931]: I0131 04:44:46.625234 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-46kjl" event={"ID":"dad5a09c-7878-4f94-ad3f-fc3ee9253186","Type":"ContainerStarted","Data":"14f7b763c767bf509894f030de722c154dbc563cdf4927c819b43edac555c97a"} Jan 31 04:44:47 crc kubenswrapper[4931]: I0131 04:44:47.959214 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-46kjl" Jan 31 04:44:48 crc kubenswrapper[4931]: I0131 04:44:48.049705 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg5cz\" (UniqueName: \"kubernetes.io/projected/dad5a09c-7878-4f94-ad3f-fc3ee9253186-kube-api-access-sg5cz\") pod \"dad5a09c-7878-4f94-ad3f-fc3ee9253186\" (UID: \"dad5a09c-7878-4f94-ad3f-fc3ee9253186\") " Jan 31 04:44:48 crc kubenswrapper[4931]: I0131 04:44:48.055871 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad5a09c-7878-4f94-ad3f-fc3ee9253186-kube-api-access-sg5cz" (OuterVolumeSpecName: "kube-api-access-sg5cz") pod "dad5a09c-7878-4f94-ad3f-fc3ee9253186" (UID: "dad5a09c-7878-4f94-ad3f-fc3ee9253186"). InnerVolumeSpecName "kube-api-access-sg5cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:48 crc kubenswrapper[4931]: I0131 04:44:48.152074 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg5cz\" (UniqueName: \"kubernetes.io/projected/dad5a09c-7878-4f94-ad3f-fc3ee9253186-kube-api-access-sg5cz\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:48 crc kubenswrapper[4931]: I0131 04:44:48.645214 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-46kjl" event={"ID":"dad5a09c-7878-4f94-ad3f-fc3ee9253186","Type":"ContainerDied","Data":"14f7b763c767bf509894f030de722c154dbc563cdf4927c819b43edac555c97a"} Jan 31 04:44:48 crc kubenswrapper[4931]: I0131 04:44:48.645258 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-46kjl" Jan 31 04:44:48 crc kubenswrapper[4931]: I0131 04:44:48.645273 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14f7b763c767bf509894f030de722c154dbc563cdf4927c819b43edac555c97a" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.533346 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-08d1-account-create-hb5c6"] Jan 31 04:44:55 crc kubenswrapper[4931]: E0131 04:44:55.534183 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad5a09c-7878-4f94-ad3f-fc3ee9253186" containerName="mariadb-database-create" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.534198 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad5a09c-7878-4f94-ad3f-fc3ee9253186" containerName="mariadb-database-create" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.534367 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad5a09c-7878-4f94-ad3f-fc3ee9253186" containerName="mariadb-database-create" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.534904 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.538928 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.548444 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-08d1-account-create-hb5c6"] Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.654749 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9n9\" (UniqueName: \"kubernetes.io/projected/cdcc69ea-531f-4691-9766-18b1cf9e8035-kube-api-access-pj9n9\") pod \"glance-08d1-account-create-hb5c6\" (UID: \"cdcc69ea-531f-4691-9766-18b1cf9e8035\") " pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.756672 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj9n9\" (UniqueName: \"kubernetes.io/projected/cdcc69ea-531f-4691-9766-18b1cf9e8035-kube-api-access-pj9n9\") pod \"glance-08d1-account-create-hb5c6\" (UID: \"cdcc69ea-531f-4691-9766-18b1cf9e8035\") " pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.782246 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj9n9\" (UniqueName: \"kubernetes.io/projected/cdcc69ea-531f-4691-9766-18b1cf9e8035-kube-api-access-pj9n9\") pod \"glance-08d1-account-create-hb5c6\" (UID: \"cdcc69ea-531f-4691-9766-18b1cf9e8035\") " pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" Jan 31 04:44:55 crc kubenswrapper[4931]: I0131 04:44:55.859155 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" Jan 31 04:44:56 crc kubenswrapper[4931]: I0131 04:44:56.386532 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-08d1-account-create-hb5c6"] Jan 31 04:44:56 crc kubenswrapper[4931]: W0131 04:44:56.387536 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdcc69ea_531f_4691_9766_18b1cf9e8035.slice/crio-5ab65d06ae0760dc875366a93fcd27df20a1567d8d32503677079e7d7e13e7f8 WatchSource:0}: Error finding container 5ab65d06ae0760dc875366a93fcd27df20a1567d8d32503677079e7d7e13e7f8: Status 404 returned error can't find the container with id 5ab65d06ae0760dc875366a93fcd27df20a1567d8d32503677079e7d7e13e7f8 Jan 31 04:44:56 crc kubenswrapper[4931]: I0131 04:44:56.713811 4931 generic.go:334] "Generic (PLEG): container finished" podID="cdcc69ea-531f-4691-9766-18b1cf9e8035" containerID="c425c7fbddde1c3f4899c9276e2b00c0fb2f34d0719d0c79ced412cd89793d23" exitCode=0 Jan 31 04:44:56 crc kubenswrapper[4931]: I0131 04:44:56.713970 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" event={"ID":"cdcc69ea-531f-4691-9766-18b1cf9e8035","Type":"ContainerDied","Data":"c425c7fbddde1c3f4899c9276e2b00c0fb2f34d0719d0c79ced412cd89793d23"} Jan 31 04:44:56 crc kubenswrapper[4931]: I0131 04:44:56.714426 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" event={"ID":"cdcc69ea-531f-4691-9766-18b1cf9e8035","Type":"ContainerStarted","Data":"5ab65d06ae0760dc875366a93fcd27df20a1567d8d32503677079e7d7e13e7f8"} Jan 31 04:44:58 crc kubenswrapper[4931]: I0131 04:44:58.025613 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" Jan 31 04:44:58 crc kubenswrapper[4931]: I0131 04:44:58.195648 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj9n9\" (UniqueName: \"kubernetes.io/projected/cdcc69ea-531f-4691-9766-18b1cf9e8035-kube-api-access-pj9n9\") pod \"cdcc69ea-531f-4691-9766-18b1cf9e8035\" (UID: \"cdcc69ea-531f-4691-9766-18b1cf9e8035\") " Jan 31 04:44:58 crc kubenswrapper[4931]: I0131 04:44:58.201334 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcc69ea-531f-4691-9766-18b1cf9e8035-kube-api-access-pj9n9" (OuterVolumeSpecName: "kube-api-access-pj9n9") pod "cdcc69ea-531f-4691-9766-18b1cf9e8035" (UID: "cdcc69ea-531f-4691-9766-18b1cf9e8035"). InnerVolumeSpecName "kube-api-access-pj9n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:58 crc kubenswrapper[4931]: I0131 04:44:58.297819 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj9n9\" (UniqueName: \"kubernetes.io/projected/cdcc69ea-531f-4691-9766-18b1cf9e8035-kube-api-access-pj9n9\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:58 crc kubenswrapper[4931]: I0131 04:44:58.730941 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" event={"ID":"cdcc69ea-531f-4691-9766-18b1cf9e8035","Type":"ContainerDied","Data":"5ab65d06ae0760dc875366a93fcd27df20a1567d8d32503677079e7d7e13e7f8"} Jan 31 04:44:58 crc kubenswrapper[4931]: I0131 04:44:58.730997 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab65d06ae0760dc875366a93fcd27df20a1567d8d32503677079e7d7e13e7f8" Jan 31 04:44:58 crc kubenswrapper[4931]: I0131 04:44:58.731092 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-08d1-account-create-hb5c6" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.140363 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8"] Jan 31 04:45:00 crc kubenswrapper[4931]: E0131 04:45:00.141306 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcc69ea-531f-4691-9766-18b1cf9e8035" containerName="mariadb-account-create" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.141331 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcc69ea-531f-4691-9766-18b1cf9e8035" containerName="mariadb-account-create" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.141540 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcc69ea-531f-4691-9766-18b1cf9e8035" containerName="mariadb-account-create" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.142247 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.144574 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.144952 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.156773 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8"] Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.224282 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/9a657e8f-334e-42c0-b96c-85ff2e5e4638-kube-api-access-qw8gs\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.224337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a657e8f-334e-42c0-b96c-85ff2e5e4638-config-volume\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.224385 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a657e8f-334e-42c0-b96c-85ff2e5e4638-secret-volume\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.325809 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/9a657e8f-334e-42c0-b96c-85ff2e5e4638-kube-api-access-qw8gs\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.326420 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a657e8f-334e-42c0-b96c-85ff2e5e4638-config-volume\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.326594 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a657e8f-334e-42c0-b96c-85ff2e5e4638-secret-volume\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.327447 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a657e8f-334e-42c0-b96c-85ff2e5e4638-config-volume\") pod 
\"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.334331 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a657e8f-334e-42c0-b96c-85ff2e5e4638-secret-volume\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.346587 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/9a657e8f-334e-42c0-b96c-85ff2e5e4638-kube-api-access-qw8gs\") pod \"collect-profiles-29497245-75rf8\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.462187 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.681265 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-r55xt"] Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.683562 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.687929 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.688251 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-xbrvx" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.688848 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.704776 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-r55xt"] Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.835400 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdgm\" (UniqueName: \"kubernetes.io/projected/7f84f888-9f57-417f-b1d1-1464af1b459c-kube-api-access-5tdgm\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.835470 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-config-data\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.835550 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-combined-ca-bundle\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.835579 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-db-sync-config-data\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.901187 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8"] Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.937383 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-db-sync-config-data\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.937465 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tdgm\" (UniqueName: \"kubernetes.io/projected/7f84f888-9f57-417f-b1d1-1464af1b459c-kube-api-access-5tdgm\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.937539 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-config-data\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.937638 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-combined-ca-bundle\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.945015 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-combined-ca-bundle\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.945025 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-config-data\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.957201 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-db-sync-config-data\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:00 crc kubenswrapper[4931]: I0131 04:45:00.968294 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tdgm\" (UniqueName: \"kubernetes.io/projected/7f84f888-9f57-417f-b1d1-1464af1b459c-kube-api-access-5tdgm\") pod \"glance-db-sync-r55xt\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " 
pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:01 crc kubenswrapper[4931]: I0131 04:45:01.008531 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:01 crc kubenswrapper[4931]: I0131 04:45:01.246194 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-r55xt"] Jan 31 04:45:01 crc kubenswrapper[4931]: W0131 04:45:01.260090 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f84f888_9f57_417f_b1d1_1464af1b459c.slice/crio-b7167b370e5f55be3416d5a60230c9326e03861e13250e8737062c77bbbc9e2e WatchSource:0}: Error finding container b7167b370e5f55be3416d5a60230c9326e03861e13250e8737062c77bbbc9e2e: Status 404 returned error can't find the container with id b7167b370e5f55be3416d5a60230c9326e03861e13250e8737062c77bbbc9e2e Jan 31 04:45:01 crc kubenswrapper[4931]: I0131 04:45:01.754037 4931 generic.go:334] "Generic (PLEG): container finished" podID="9a657e8f-334e-42c0-b96c-85ff2e5e4638" containerID="07356ec30e134aa79aaa0738316f1e418d7fdd3e41440b15acf4e15bc90a9098" exitCode=0 Jan 31 04:45:01 crc kubenswrapper[4931]: I0131 04:45:01.754080 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" event={"ID":"9a657e8f-334e-42c0-b96c-85ff2e5e4638","Type":"ContainerDied","Data":"07356ec30e134aa79aaa0738316f1e418d7fdd3e41440b15acf4e15bc90a9098"} Jan 31 04:45:01 crc kubenswrapper[4931]: I0131 04:45:01.754329 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" event={"ID":"9a657e8f-334e-42c0-b96c-85ff2e5e4638","Type":"ContainerStarted","Data":"cf3614475ace4e4633d70fec572a07c44959879c004b44fccef6521f2a5f8e33"} Jan 31 04:45:01 crc kubenswrapper[4931]: I0131 04:45:01.755622 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-r55xt" event={"ID":"7f84f888-9f57-417f-b1d1-1464af1b459c","Type":"ContainerStarted","Data":"b7167b370e5f55be3416d5a60230c9326e03861e13250e8737062c77bbbc9e2e"} Jan 31 04:45:02 crc kubenswrapper[4931]: I0131 04:45:02.765787 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-r55xt" event={"ID":"7f84f888-9f57-417f-b1d1-1464af1b459c","Type":"ContainerStarted","Data":"3b68d38a57d469446d6b3d17f5e7cff25f00348a4bd9a22046aa05bc5d595838"} Jan 31 04:45:02 crc kubenswrapper[4931]: I0131 04:45:02.802638 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-r55xt" podStartSLOduration=2.8026169039999997 podStartE2EDuration="2.802616904s" podCreationTimestamp="2026-01-31 04:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.790903189 +0000 UTC m=+1261.600132083" watchObservedRunningTime="2026-01-31 04:45:02.802616904 +0000 UTC m=+1261.611845788" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.082128 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.169423 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a657e8f-334e-42c0-b96c-85ff2e5e4638-config-volume\") pod \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.170009 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/9a657e8f-334e-42c0-b96c-85ff2e5e4638-kube-api-access-qw8gs\") pod \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.170063 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a657e8f-334e-42c0-b96c-85ff2e5e4638-secret-volume\") pod \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\" (UID: \"9a657e8f-334e-42c0-b96c-85ff2e5e4638\") " Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.170282 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a657e8f-334e-42c0-b96c-85ff2e5e4638-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a657e8f-334e-42c0-b96c-85ff2e5e4638" (UID: "9a657e8f-334e-42c0-b96c-85ff2e5e4638"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.170650 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a657e8f-334e-42c0-b96c-85ff2e5e4638-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.178900 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a657e8f-334e-42c0-b96c-85ff2e5e4638-kube-api-access-qw8gs" (OuterVolumeSpecName: "kube-api-access-qw8gs") pod "9a657e8f-334e-42c0-b96c-85ff2e5e4638" (UID: "9a657e8f-334e-42c0-b96c-85ff2e5e4638"). InnerVolumeSpecName "kube-api-access-qw8gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.178890 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a657e8f-334e-42c0-b96c-85ff2e5e4638-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a657e8f-334e-42c0-b96c-85ff2e5e4638" (UID: "9a657e8f-334e-42c0-b96c-85ff2e5e4638"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.271869 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/9a657e8f-334e-42c0-b96c-85ff2e5e4638-kube-api-access-qw8gs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.271909 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a657e8f-334e-42c0-b96c-85ff2e5e4638-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.777236 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.777612 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-75rf8" event={"ID":"9a657e8f-334e-42c0-b96c-85ff2e5e4638","Type":"ContainerDied","Data":"cf3614475ace4e4633d70fec572a07c44959879c004b44fccef6521f2a5f8e33"} Jan 31 04:45:03 crc kubenswrapper[4931]: I0131 04:45:03.777636 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf3614475ace4e4633d70fec572a07c44959879c004b44fccef6521f2a5f8e33" Jan 31 04:45:04 crc kubenswrapper[4931]: I0131 04:45:04.787191 4931 generic.go:334] "Generic (PLEG): container finished" podID="7f84f888-9f57-417f-b1d1-1464af1b459c" containerID="3b68d38a57d469446d6b3d17f5e7cff25f00348a4bd9a22046aa05bc5d595838" exitCode=0 Jan 31 04:45:04 crc kubenswrapper[4931]: I0131 04:45:04.787322 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-r55xt" event={"ID":"7f84f888-9f57-417f-b1d1-1464af1b459c","Type":"ContainerDied","Data":"3b68d38a57d469446d6b3d17f5e7cff25f00348a4bd9a22046aa05bc5d595838"} Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.055710 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.214743 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tdgm\" (UniqueName: \"kubernetes.io/projected/7f84f888-9f57-417f-b1d1-1464af1b459c-kube-api-access-5tdgm\") pod \"7f84f888-9f57-417f-b1d1-1464af1b459c\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.214795 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-combined-ca-bundle\") pod \"7f84f888-9f57-417f-b1d1-1464af1b459c\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.214857 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-db-sync-config-data\") pod \"7f84f888-9f57-417f-b1d1-1464af1b459c\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.214908 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-config-data\") pod \"7f84f888-9f57-417f-b1d1-1464af1b459c\" (UID: \"7f84f888-9f57-417f-b1d1-1464af1b459c\") " Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.221100 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f84f888-9f57-417f-b1d1-1464af1b459c-kube-api-access-5tdgm" (OuterVolumeSpecName: "kube-api-access-5tdgm") pod "7f84f888-9f57-417f-b1d1-1464af1b459c" (UID: "7f84f888-9f57-417f-b1d1-1464af1b459c"). InnerVolumeSpecName "kube-api-access-5tdgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.221105 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7f84f888-9f57-417f-b1d1-1464af1b459c" (UID: "7f84f888-9f57-417f-b1d1-1464af1b459c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.238028 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f84f888-9f57-417f-b1d1-1464af1b459c" (UID: "7f84f888-9f57-417f-b1d1-1464af1b459c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.259959 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-config-data" (OuterVolumeSpecName: "config-data") pod "7f84f888-9f57-417f-b1d1-1464af1b459c" (UID: "7f84f888-9f57-417f-b1d1-1464af1b459c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.317143 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tdgm\" (UniqueName: \"kubernetes.io/projected/7f84f888-9f57-417f-b1d1-1464af1b459c-kube-api-access-5tdgm\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.317225 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.317238 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.317248 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f84f888-9f57-417f-b1d1-1464af1b459c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.804569 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-r55xt" event={"ID":"7f84f888-9f57-417f-b1d1-1464af1b459c","Type":"ContainerDied","Data":"b7167b370e5f55be3416d5a60230c9326e03861e13250e8737062c77bbbc9e2e"} Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.804618 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7167b370e5f55be3416d5a60230c9326e03861e13250e8737062c77bbbc9e2e" Jan 31 04:45:06 crc kubenswrapper[4931]: I0131 04:45:06.804701 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-r55xt" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.087870 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:07 crc kubenswrapper[4931]: E0131 04:45:07.088296 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f84f888-9f57-417f-b1d1-1464af1b459c" containerName="glance-db-sync" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.088313 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f84f888-9f57-417f-b1d1-1464af1b459c" containerName="glance-db-sync" Jan 31 04:45:07 crc kubenswrapper[4931]: E0131 04:45:07.088334 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a657e8f-334e-42c0-b96c-85ff2e5e4638" containerName="collect-profiles" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.088342 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a657e8f-334e-42c0-b96c-85ff2e5e4638" containerName="collect-profiles" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.088485 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a657e8f-334e-42c0-b96c-85ff2e5e4638" containerName="collect-profiles" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.088507 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f84f888-9f57-417f-b1d1-1464af1b459c" containerName="glance-db-sync" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.089419 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.093161 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.093548 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.094231 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.094589 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-internal-svc" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.095442 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-xbrvx" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.096651 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-public-svc" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.107916 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.115507 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:07 crc kubenswrapper[4931]: E0131 04:45:07.116343 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-v4hxd logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-v4hxd logs public-tls-certs scripts]: context canceled" pod="glance-kuttl-tests/glance-default-single-0" 
podUID="cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.230678 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231088 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-logs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231130 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-scripts\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231360 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-config-data\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-httpd-run\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231602 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4hxd\" (UniqueName: \"kubernetes.io/projected/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-kube-api-access-v4hxd\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231743 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231819 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.231916 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" 
(UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333169 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333286 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333310 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-logs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333344 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-scripts\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333377 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-config-data\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333423 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-httpd-run\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333448 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4hxd\" (UniqueName: \"kubernetes.io/projected/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-kube-api-access-v4hxd\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333474 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.333502 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 
crc kubenswrapper[4931]: I0131 04:45:07.333633 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.334543 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-logs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.335102 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-httpd-run\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.338698 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-scripts\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.339327 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-config-data\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.339565 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.340541 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.340854 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.358153 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4hxd\" (UniqueName: \"kubernetes.io/projected/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-kube-api-access-v4hxd\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.380077 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.812374 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.824639 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943263 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-public-tls-certs\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943429 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-config-data\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943516 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4hxd\" (UniqueName: \"kubernetes.io/projected/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-kube-api-access-v4hxd\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943622 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-logs\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943716 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-combined-ca-bundle\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943804 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-internal-tls-certs\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943849 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-scripts\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943928 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-httpd-run\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.943962 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\" (UID: \"cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3\") " Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.944774 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.945195 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-logs" (OuterVolumeSpecName: "logs") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.949321 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-scripts" (OuterVolumeSpecName: "scripts") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.949371 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.950368 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-config-data" (OuterVolumeSpecName: "config-data") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.950363 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.953861 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.953889 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:07 crc kubenswrapper[4931]: I0131 04:45:07.954066 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-kube-api-access-v4hxd" (OuterVolumeSpecName: "kube-api-access-v4hxd") pod "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" (UID: "cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3"). InnerVolumeSpecName "kube-api-access-v4hxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046667 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046765 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4hxd\" (UniqueName: \"kubernetes.io/projected/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-kube-api-access-v4hxd\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046790 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046809 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046828 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046845 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046895 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046915 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.046934 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.059527 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.149066 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.820443 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.868949 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:08 crc kubenswrapper[4931]: I0131 04:45:08.870338 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:09 crc kubenswrapper[4931]: I0131 04:45:09.914338 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3" path="/var/lib/kubelet/pods/cf2f6bfe-f6fa-45c8-8cc7-97702bb9a2c3/volumes" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.750195 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.752412 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.758835 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.760227 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.760578 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-xbrvx" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.761172 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.761419 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-public-svc" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.761544 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-internal-svc" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.784953 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.831843 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-httpd-run\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.831993 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-logs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.832131 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.832184 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.832306 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-scripts\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.832335 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdmr5\" (UniqueName: \"kubernetes.io/projected/06d80fa1-b83e-4b95-999f-adfa1b7e097f-kube-api-access-mdmr5\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.832392 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-config-data\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.832442 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.832460 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933694 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-scripts\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933760 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdmr5\" (UniqueName: \"kubernetes.io/projected/06d80fa1-b83e-4b95-999f-adfa1b7e097f-kube-api-access-mdmr5\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933809 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-config-data\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933847 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933864 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933898 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-httpd-run\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933924 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-logs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933946 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.933983 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.935202 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.935470 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-httpd-run\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.935502 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-logs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.940800 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-scripts\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.941053 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.942125 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-config-data\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.945453 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.945815 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.956901 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdmr5\" (UniqueName: \"kubernetes.io/projected/06d80fa1-b83e-4b95-999f-adfa1b7e097f-kube-api-access-mdmr5\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:13 crc kubenswrapper[4931]: I0131 04:45:13.963326 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:14 crc kubenswrapper[4931]: I0131 04:45:14.077169 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:14 crc kubenswrapper[4931]: I0131 04:45:14.507708 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:14 crc kubenswrapper[4931]: I0131 04:45:14.873656 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"06d80fa1-b83e-4b95-999f-adfa1b7e097f","Type":"ContainerStarted","Data":"a9d707fb8e3dc0e6403739ca7b099ac2396f3d39b903018157fddd50afba47ba"} Jan 31 04:45:15 crc kubenswrapper[4931]: I0131 04:45:15.889042 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"06d80fa1-b83e-4b95-999f-adfa1b7e097f","Type":"ContainerStarted","Data":"c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb"} Jan 31 04:45:15 crc kubenswrapper[4931]: I0131 04:45:15.889395 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"06d80fa1-b83e-4b95-999f-adfa1b7e097f","Type":"ContainerStarted","Data":"d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a"} Jan 31 04:45:15 crc kubenswrapper[4931]: I0131 04:45:15.913134 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.913111307 podStartE2EDuration="2.913111307s" podCreationTimestamp="2026-01-31 04:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:15.906291493 +0000 UTC m=+1274.715520367" watchObservedRunningTime="2026-01-31 04:45:15.913111307 +0000 UTC m=+1274.722340181" Jan 31 04:45:24 crc kubenswrapper[4931]: I0131 04:45:24.077961 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:24 crc kubenswrapper[4931]: I0131 04:45:24.078558 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:24 crc kubenswrapper[4931]: I0131 04:45:24.111007 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:24 crc kubenswrapper[4931]: I0131 04:45:24.134193 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:24 crc kubenswrapper[4931]: I0131 04:45:24.961200 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:24 crc kubenswrapper[4931]: I0131 04:45:24.961662 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:26 crc kubenswrapper[4931]: I0131 04:45:26.784896 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:26 crc kubenswrapper[4931]: I0131 04:45:26.895174 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.568606 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-r55xt"] Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.576068 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["glance-kuttl-tests/glance-db-sync-r55xt"] Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.634208 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.661895 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance08d1-account-delete-jtvd9"] Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.662709 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.679906 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance08d1-account-delete-jtvd9"] Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.785319 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/758911f9-eb4d-43da-81ae-9e1d7c1eaf11-kube-api-access-br642\") pod \"glance08d1-account-delete-jtvd9\" (UID: \"758911f9-eb4d-43da-81ae-9e1d7c1eaf11\") " pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.886982 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/758911f9-eb4d-43da-81ae-9e1d7c1eaf11-kube-api-access-br642\") pod \"glance08d1-account-delete-jtvd9\" (UID: \"758911f9-eb4d-43da-81ae-9e1d7c1eaf11\") " pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.905043 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/758911f9-eb4d-43da-81ae-9e1d7c1eaf11-kube-api-access-br642\") pod \"glance08d1-account-delete-jtvd9\" (UID: \"758911f9-eb4d-43da-81ae-9e1d7c1eaf11\") " pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.982185 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.993572 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerName="glance-log" containerID="cri-o://d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a" gracePeriod=30 Jan 31 04:45:28 crc kubenswrapper[4931]: I0131 04:45:28.993697 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerName="glance-httpd" containerID="cri-o://c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb" gracePeriod=30 Jan 31 04:45:29 crc kubenswrapper[4931]: I0131 04:45:29.430755 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance08d1-account-delete-jtvd9"] Jan 31 04:45:29 crc kubenswrapper[4931]: W0131 04:45:29.438048 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod758911f9_eb4d_43da_81ae_9e1d7c1eaf11.slice/crio-6f9c2cea5286c107da52288e7dbc006fd21539fce7a44539720bec72260dd2be WatchSource:0}: Error finding container 6f9c2cea5286c107da52288e7dbc006fd21539fce7a44539720bec72260dd2be: Status 404 returned error can't find the container with id 6f9c2cea5286c107da52288e7dbc006fd21539fce7a44539720bec72260dd2be Jan 31 04:45:29 crc kubenswrapper[4931]: I0131 04:45:29.904823 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f84f888-9f57-417f-b1d1-1464af1b459c" path="/var/lib/kubelet/pods/7f84f888-9f57-417f-b1d1-1464af1b459c/volumes" Jan 31 04:45:30 crc kubenswrapper[4931]: I0131 04:45:30.001567 4931 generic.go:334] "Generic (PLEG): container finished" podID="758911f9-eb4d-43da-81ae-9e1d7c1eaf11" containerID="a5a5ef56818419e412ef08b28e6a2d5c4791167bf0d9f38620af8119ac8bf8f3" exitCode=0 Jan 31 04:45:30 crc kubenswrapper[4931]: I0131 04:45:30.001652 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" event={"ID":"758911f9-eb4d-43da-81ae-9e1d7c1eaf11","Type":"ContainerDied","Data":"a5a5ef56818419e412ef08b28e6a2d5c4791167bf0d9f38620af8119ac8bf8f3"} Jan 31 04:45:30 crc kubenswrapper[4931]: I0131 04:45:30.001688 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" event={"ID":"758911f9-eb4d-43da-81ae-9e1d7c1eaf11","Type":"ContainerStarted","Data":"6f9c2cea5286c107da52288e7dbc006fd21539fce7a44539720bec72260dd2be"} Jan 31 04:45:30 crc kubenswrapper[4931]: I0131 04:45:30.005288 4931 generic.go:334] "Generic (PLEG): container finished" podID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerID="d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a" exitCode=143 Jan 31 04:45:30 crc kubenswrapper[4931]: I0131 04:45:30.005333 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"06d80fa1-b83e-4b95-999f-adfa1b7e097f","Type":"ContainerDied","Data":"d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a"} Jan 31 04:45:31 crc kubenswrapper[4931]: I0131 04:45:31.297940 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" Jan 31 04:45:31 crc kubenswrapper[4931]: I0131 04:45:31.421076 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/758911f9-eb4d-43da-81ae-9e1d7c1eaf11-kube-api-access-br642\") pod \"758911f9-eb4d-43da-81ae-9e1d7c1eaf11\" (UID: \"758911f9-eb4d-43da-81ae-9e1d7c1eaf11\") " Jan 31 04:45:31 crc kubenswrapper[4931]: I0131 04:45:31.442141 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758911f9-eb4d-43da-81ae-9e1d7c1eaf11-kube-api-access-br642" (OuterVolumeSpecName: "kube-api-access-br642") pod "758911f9-eb4d-43da-81ae-9e1d7c1eaf11" (UID: "758911f9-eb4d-43da-81ae-9e1d7c1eaf11"). InnerVolumeSpecName "kube-api-access-br642". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:31 crc kubenswrapper[4931]: I0131 04:45:31.522716 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/758911f9-eb4d-43da-81ae-9e1d7c1eaf11-kube-api-access-br642\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.027552 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" event={"ID":"758911f9-eb4d-43da-81ae-9e1d7c1eaf11","Type":"ContainerDied","Data":"6f9c2cea5286c107da52288e7dbc006fd21539fce7a44539720bec72260dd2be"} Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.027615 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9c2cea5286c107da52288e7dbc006fd21539fce7a44539720bec72260dd2be" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.027646 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance08d1-account-delete-jtvd9" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.459598 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.541672 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.541823 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-logs\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.541879 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-config-data\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542377 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-logs" (OuterVolumeSpecName: "logs") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542498 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-combined-ca-bundle\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542540 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-httpd-run\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542558 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-scripts\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542588 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-public-tls-certs\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542619 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-internal-tls-certs\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542650 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdmr5\" (UniqueName: \"kubernetes.io/projected/06d80fa1-b83e-4b95-999f-adfa1b7e097f-kube-api-access-mdmr5\") pod \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\" (UID: \"06d80fa1-b83e-4b95-999f-adfa1b7e097f\") " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.542984 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.543008 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.546478 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.548267 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d80fa1-b83e-4b95-999f-adfa1b7e097f-kube-api-access-mdmr5" (OuterVolumeSpecName: "kube-api-access-mdmr5") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "kube-api-access-mdmr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.563669 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-scripts" (OuterVolumeSpecName: "scripts") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.577134 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.589615 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-config-data" (OuterVolumeSpecName: "config-data") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.606409 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.611568 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "06d80fa1-b83e-4b95-999f-adfa1b7e097f" (UID: "06d80fa1-b83e-4b95-999f-adfa1b7e097f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645095 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645391 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06d80fa1-b83e-4b95-999f-adfa1b7e097f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645401 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645410 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645420 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645430 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdmr5\" (UniqueName: \"kubernetes.io/projected/06d80fa1-b83e-4b95-999f-adfa1b7e097f-kube-api-access-mdmr5\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645467 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.645477 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d80fa1-b83e-4b95-999f-adfa1b7e097f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.665254 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 04:45:32 crc kubenswrapper[4931]: I0131 04:45:32.747281 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.037933 4931 generic.go:334] "Generic (PLEG): container finished" podID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerID="c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb" exitCode=0 Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.037977 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"06d80fa1-b83e-4b95-999f-adfa1b7e097f","Type":"ContainerDied","Data":"c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb"} Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.038009 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"06d80fa1-b83e-4b95-999f-adfa1b7e097f","Type":"ContainerDied","Data":"a9d707fb8e3dc0e6403739ca7b099ac2396f3d39b903018157fddd50afba47ba"} Jan 31 04:45:33 crc 
kubenswrapper[4931]: I0131 04:45:33.038026 4931 scope.go:117] "RemoveContainer" containerID="c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.038032 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.056838 4931 scope.go:117] "RemoveContainer" containerID="d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.077385 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.088031 4931 scope.go:117] "RemoveContainer" containerID="c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb" Jan 31 04:45:33 crc kubenswrapper[4931]: E0131 04:45:33.088626 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb\": container with ID starting with c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb not found: ID does not exist" containerID="c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.088658 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb"} err="failed to get container status \"c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb\": rpc error: code = NotFound desc = could not find container \"c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb\": container with ID starting with c70a8c9a6c576ef495a03eb3a0dfdbfcd12cea891f1d68c4d068404023fa55bb not found: ID does not exist" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.088690 4931 scope.go:117] "RemoveContainer" containerID="d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a" Jan 31 04:45:33 crc kubenswrapper[4931]: E0131 04:45:33.089224 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a\": container with ID starting with d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a not found: ID does not exist" containerID="d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.089265 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a"} err="failed to get container status \"d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a\": rpc error: code = NotFound desc = could not find container \"d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a\": container with ID starting with d92cb85b126c549eb60cc481c99e4d3fe9b8499d7b6eabf34fbcfbcf0cfe956a not found: ID does not exist" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.089883 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.710289 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-46kjl"] Jan 31 04:45:33 crc kubenswrapper[4931]: 
I0131 04:45:33.718523 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-46kjl"] Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.724656 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance08d1-account-delete-jtvd9"] Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.732171 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-08d1-account-create-hb5c6"] Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.738608 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance08d1-account-delete-jtvd9"] Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.746358 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-08d1-account-create-hb5c6"] Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.909816 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" path="/var/lib/kubelet/pods/06d80fa1-b83e-4b95-999f-adfa1b7e097f/volumes" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.911697 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="758911f9-eb4d-43da-81ae-9e1d7c1eaf11" path="/var/lib/kubelet/pods/758911f9-eb4d-43da-81ae-9e1d7c1eaf11/volumes" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.912901 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcc69ea-531f-4691-9766-18b1cf9e8035" path="/var/lib/kubelet/pods/cdcc69ea-531f-4691-9766-18b1cf9e8035/volumes" Jan 31 04:45:33 crc kubenswrapper[4931]: I0131 04:45:33.914181 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad5a09c-7878-4f94-ad3f-fc3ee9253186" path="/var/lib/kubelet/pods/dad5a09c-7878-4f94-ad3f-fc3ee9253186/volumes" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.409580 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-zbsb7"] Jan 31 04:45:34 crc kubenswrapper[4931]: E0131 04:45:34.409932 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758911f9-eb4d-43da-81ae-9e1d7c1eaf11" containerName="mariadb-account-delete" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.409953 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="758911f9-eb4d-43da-81ae-9e1d7c1eaf11" containerName="mariadb-account-delete" Jan 31 04:45:34 crc kubenswrapper[4931]: E0131 04:45:34.409973 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerName="glance-log" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.409984 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerName="glance-log" Jan 31 04:45:34 crc kubenswrapper[4931]: E0131 04:45:34.410014 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerName="glance-httpd" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.410024 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerName="glance-httpd" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.410171 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" containerName="glance-log" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.410204 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d80fa1-b83e-4b95-999f-adfa1b7e097f" 
containerName="glance-httpd" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.410219 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="758911f9-eb4d-43da-81ae-9e1d7c1eaf11" containerName="mariadb-account-delete" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.410763 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-zbsb7" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.418759 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-zbsb7"] Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.472696 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f92w7\" (UniqueName: \"kubernetes.io/projected/b1df0a92-18a7-4cf4-9577-c94a66b3ca3c-kube-api-access-f92w7\") pod \"glance-db-create-zbsb7\" (UID: \"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c\") " pod="glance-kuttl-tests/glance-db-create-zbsb7" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.574621 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92w7\" (UniqueName: \"kubernetes.io/projected/b1df0a92-18a7-4cf4-9577-c94a66b3ca3c-kube-api-access-f92w7\") pod \"glance-db-create-zbsb7\" (UID: \"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c\") " pod="glance-kuttl-tests/glance-db-create-zbsb7" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.591809 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92w7\" (UniqueName: \"kubernetes.io/projected/b1df0a92-18a7-4cf4-9577-c94a66b3ca3c-kube-api-access-f92w7\") pod \"glance-db-create-zbsb7\" (UID: \"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c\") " pod="glance-kuttl-tests/glance-db-create-zbsb7" Jan 31 04:45:34 crc kubenswrapper[4931]: I0131 04:45:34.733661 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-zbsb7" Jan 31 04:45:35 crc kubenswrapper[4931]: I0131 04:45:35.185813 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-zbsb7"] Jan 31 04:45:35 crc kubenswrapper[4931]: W0131 04:45:35.198969 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1df0a92_18a7_4cf4_9577_c94a66b3ca3c.slice/crio-c727c9c9c2805ab4e74b2a801d37f748dda1a2d39eaa9306d0baa692636b5860 WatchSource:0}: Error finding container c727c9c9c2805ab4e74b2a801d37f748dda1a2d39eaa9306d0baa692636b5860: Status 404 returned error can't find the container with id c727c9c9c2805ab4e74b2a801d37f748dda1a2d39eaa9306d0baa692636b5860 Jan 31 04:45:36 crc kubenswrapper[4931]: I0131 04:45:36.072396 4931 generic.go:334] "Generic (PLEG): container finished" podID="b1df0a92-18a7-4cf4-9577-c94a66b3ca3c" containerID="89709f3a55a91267273b5d27d43306d68987b4bc0cdbf1a13c36292caa6af38c" exitCode=0 Jan 31 04:45:36 crc kubenswrapper[4931]: I0131 04:45:36.072455 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-zbsb7" event={"ID":"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c","Type":"ContainerDied","Data":"89709f3a55a91267273b5d27d43306d68987b4bc0cdbf1a13c36292caa6af38c"} Jan 31 04:45:36 crc kubenswrapper[4931]: I0131 04:45:36.072797 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-zbsb7" event={"ID":"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c","Type":"ContainerStarted","Data":"c727c9c9c2805ab4e74b2a801d37f748dda1a2d39eaa9306d0baa692636b5860"} Jan 31 04:45:37 crc kubenswrapper[4931]: I0131 04:45:37.411595 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-zbsb7" Jan 31 04:45:37 crc kubenswrapper[4931]: I0131 04:45:37.614036 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f92w7\" (UniqueName: \"kubernetes.io/projected/b1df0a92-18a7-4cf4-9577-c94a66b3ca3c-kube-api-access-f92w7\") pod \"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c\" (UID: \"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c\") " Jan 31 04:45:37 crc kubenswrapper[4931]: I0131 04:45:37.619525 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1df0a92-18a7-4cf4-9577-c94a66b3ca3c-kube-api-access-f92w7" (OuterVolumeSpecName: "kube-api-access-f92w7") pod "b1df0a92-18a7-4cf4-9577-c94a66b3ca3c" (UID: "b1df0a92-18a7-4cf4-9577-c94a66b3ca3c"). InnerVolumeSpecName "kube-api-access-f92w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:37 crc kubenswrapper[4931]: I0131 04:45:37.715492 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f92w7\" (UniqueName: \"kubernetes.io/projected/b1df0a92-18a7-4cf4-9577-c94a66b3ca3c-kube-api-access-f92w7\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:38 crc kubenswrapper[4931]: I0131 04:45:38.092339 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-zbsb7" event={"ID":"b1df0a92-18a7-4cf4-9577-c94a66b3ca3c","Type":"ContainerDied","Data":"c727c9c9c2805ab4e74b2a801d37f748dda1a2d39eaa9306d0baa692636b5860"} Jan 31 04:45:38 crc kubenswrapper[4931]: I0131 04:45:38.092381 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c727c9c9c2805ab4e74b2a801d37f748dda1a2d39eaa9306d0baa692636b5860" Jan 31 04:45:38 crc kubenswrapper[4931]: I0131 04:45:38.092443 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-zbsb7" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.438829 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-2502-account-create-jn4hj"] Jan 31 04:45:44 crc kubenswrapper[4931]: E0131 04:45:44.440135 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1df0a92-18a7-4cf4-9577-c94a66b3ca3c" containerName="mariadb-database-create" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.440166 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1df0a92-18a7-4cf4-9577-c94a66b3ca3c" containerName="mariadb-database-create" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.440508 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1df0a92-18a7-4cf4-9577-c94a66b3ca3c" containerName="mariadb-database-create" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.441445 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.444293 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.446621 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-2502-account-create-jn4hj"] Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.621167 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzx2v\" (UniqueName: \"kubernetes.io/projected/c4368258-0f40-42db-a1ab-c50f30518a4e-kube-api-access-rzx2v\") pod \"glance-2502-account-create-jn4hj\" (UID: \"c4368258-0f40-42db-a1ab-c50f30518a4e\") " pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.722503 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzx2v\" (UniqueName: \"kubernetes.io/projected/c4368258-0f40-42db-a1ab-c50f30518a4e-kube-api-access-rzx2v\") pod \"glance-2502-account-create-jn4hj\" (UID: \"c4368258-0f40-42db-a1ab-c50f30518a4e\") " pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.749494 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzx2v\" (UniqueName: \"kubernetes.io/projected/c4368258-0f40-42db-a1ab-c50f30518a4e-kube-api-access-rzx2v\") pod \"glance-2502-account-create-jn4hj\" (UID: \"c4368258-0f40-42db-a1ab-c50f30518a4e\") " pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" Jan 31 04:45:44 crc kubenswrapper[4931]: I0131 04:45:44.768367 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" Jan 31 04:45:45 crc kubenswrapper[4931]: I0131 04:45:45.226633 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-2502-account-create-jn4hj"] Jan 31 04:45:46 crc kubenswrapper[4931]: I0131 04:45:46.155573 4931 generic.go:334] "Generic (PLEG): container finished" podID="c4368258-0f40-42db-a1ab-c50f30518a4e" containerID="daafcd839dd86300149d0cab96231384784a9ced60a536ab559836edf1fd3af0" exitCode=0 Jan 31 04:45:46 crc kubenswrapper[4931]: I0131 04:45:46.155633 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" event={"ID":"c4368258-0f40-42db-a1ab-c50f30518a4e","Type":"ContainerDied","Data":"daafcd839dd86300149d0cab96231384784a9ced60a536ab559836edf1fd3af0"} Jan 31 04:45:46 crc kubenswrapper[4931]: I0131 04:45:46.155926 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" event={"ID":"c4368258-0f40-42db-a1ab-c50f30518a4e","Type":"ContainerStarted","Data":"04f19575fc40b95419e78d8ff2f7e8750975f8ad6f29385f2b2a892beaca7e69"} Jan 31 04:45:47 crc kubenswrapper[4931]: I0131 04:45:47.474713 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" Jan 31 04:45:47 crc kubenswrapper[4931]: I0131 04:45:47.667509 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzx2v\" (UniqueName: \"kubernetes.io/projected/c4368258-0f40-42db-a1ab-c50f30518a4e-kube-api-access-rzx2v\") pod \"c4368258-0f40-42db-a1ab-c50f30518a4e\" (UID: \"c4368258-0f40-42db-a1ab-c50f30518a4e\") " Jan 31 04:45:47 crc kubenswrapper[4931]: I0131 04:45:47.677864 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4368258-0f40-42db-a1ab-c50f30518a4e-kube-api-access-rzx2v" (OuterVolumeSpecName: "kube-api-access-rzx2v") pod "c4368258-0f40-42db-a1ab-c50f30518a4e" (UID: "c4368258-0f40-42db-a1ab-c50f30518a4e"). InnerVolumeSpecName "kube-api-access-rzx2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:47 crc kubenswrapper[4931]: I0131 04:45:47.769990 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzx2v\" (UniqueName: \"kubernetes.io/projected/c4368258-0f40-42db-a1ab-c50f30518a4e-kube-api-access-rzx2v\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:48 crc kubenswrapper[4931]: I0131 04:45:48.179377 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" event={"ID":"c4368258-0f40-42db-a1ab-c50f30518a4e","Type":"ContainerDied","Data":"04f19575fc40b95419e78d8ff2f7e8750975f8ad6f29385f2b2a892beaca7e69"} Jan 31 04:45:48 crc kubenswrapper[4931]: I0131 04:45:48.179421 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04f19575fc40b95419e78d8ff2f7e8750975f8ad6f29385f2b2a892beaca7e69" Jan 31 04:45:48 crc kubenswrapper[4931]: I0131 04:45:48.179959 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-2502-account-create-jn4hj" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.571369 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-lf6qp"] Jan 31 04:45:49 crc kubenswrapper[4931]: E0131 04:45:49.572035 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4368258-0f40-42db-a1ab-c50f30518a4e" containerName="mariadb-account-create" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.572052 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4368258-0f40-42db-a1ab-c50f30518a4e" containerName="mariadb-account-create" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.572243 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4368258-0f40-42db-a1ab-c50f30518a4e" containerName="mariadb-account-create" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.572864 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.575177 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.575343 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-4b9hw" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.582884 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lf6qp"] Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.696835 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-db-sync-config-data\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.696921 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8vx\" (UniqueName: \"kubernetes.io/projected/7d392663-51d0-4a1d-88ef-ade68860d06a-kube-api-access-sw8vx\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.696956 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-config-data\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.798697 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw8vx\" (UniqueName: \"kubernetes.io/projected/7d392663-51d0-4a1d-88ef-ade68860d06a-kube-api-access-sw8vx\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.798765 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-config-data\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.798842 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-db-sync-config-data\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.804501 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-db-sync-config-data\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.804746 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-config-data\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.814636 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8vx\" (UniqueName: \"kubernetes.io/projected/7d392663-51d0-4a1d-88ef-ade68860d06a-kube-api-access-sw8vx\") pod \"glance-db-sync-lf6qp\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:49 crc kubenswrapper[4931]: I0131 04:45:49.895788 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:50 crc kubenswrapper[4931]: I0131 04:45:50.326667 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lf6qp"] Jan 31 04:45:51 crc kubenswrapper[4931]: I0131 04:45:51.133165 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:45:51 crc kubenswrapper[4931]: I0131 04:45:51.133506 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:45:51 crc kubenswrapper[4931]: I0131 04:45:51.207654 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-lf6qp" event={"ID":"7d392663-51d0-4a1d-88ef-ade68860d06a","Type":"ContainerStarted","Data":"687fb00739344b8b2ff681b72260259e5da90ff282675d01548b950d09d667ab"} Jan 31 04:45:51 crc kubenswrapper[4931]: I0131 04:45:51.207694 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-lf6qp" event={"ID":"7d392663-51d0-4a1d-88ef-ade68860d06a","Type":"ContainerStarted","Data":"7dc9fb49e7f7a59d09daff8c7aac3f366725a7469ebcfaf0f845145852f5671d"} Jan 31 04:45:51 crc kubenswrapper[4931]: I0131 04:45:51.228799 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-lf6qp" podStartSLOduration=2.2287757 podStartE2EDuration="2.2287757s" podCreationTimestamp="2026-01-31 04:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:51.227374363 +0000 UTC m=+1310.036603247" watchObservedRunningTime="2026-01-31 04:45:51.2287757 +0000 UTC m=+1310.038004574" Jan 31 04:45:54 crc kubenswrapper[4931]: I0131 04:45:54.228842 4931 generic.go:334] "Generic (PLEG): container finished" podID="7d392663-51d0-4a1d-88ef-ade68860d06a" containerID="687fb00739344b8b2ff681b72260259e5da90ff282675d01548b950d09d667ab" exitCode=0 Jan 31 04:45:54 crc kubenswrapper[4931]: I0131 04:45:54.228922 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-lf6qp" event={"ID":"7d392663-51d0-4a1d-88ef-ade68860d06a","Type":"ContainerDied","Data":"687fb00739344b8b2ff681b72260259e5da90ff282675d01548b950d09d667ab"} Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.506592 4931 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.592082 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-db-sync-config-data\") pod \"7d392663-51d0-4a1d-88ef-ade68860d06a\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.592193 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw8vx\" (UniqueName: \"kubernetes.io/projected/7d392663-51d0-4a1d-88ef-ade68860d06a-kube-api-access-sw8vx\") pod \"7d392663-51d0-4a1d-88ef-ade68860d06a\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.592233 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-config-data\") pod \"7d392663-51d0-4a1d-88ef-ade68860d06a\" (UID: \"7d392663-51d0-4a1d-88ef-ade68860d06a\") " Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.599009 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7d392663-51d0-4a1d-88ef-ade68860d06a" (UID: "7d392663-51d0-4a1d-88ef-ade68860d06a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.599233 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d392663-51d0-4a1d-88ef-ade68860d06a-kube-api-access-sw8vx" (OuterVolumeSpecName: "kube-api-access-sw8vx") pod "7d392663-51d0-4a1d-88ef-ade68860d06a" (UID: "7d392663-51d0-4a1d-88ef-ade68860d06a"). InnerVolumeSpecName "kube-api-access-sw8vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.641901 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-config-data" (OuterVolumeSpecName: "config-data") pod "7d392663-51d0-4a1d-88ef-ade68860d06a" (UID: "7d392663-51d0-4a1d-88ef-ade68860d06a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.694642 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw8vx\" (UniqueName: \"kubernetes.io/projected/7d392663-51d0-4a1d-88ef-ade68860d06a-kube-api-access-sw8vx\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.694699 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:55 crc kubenswrapper[4931]: I0131 04:45:55.694752 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d392663-51d0-4a1d-88ef-ade68860d06a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:56 crc kubenswrapper[4931]: I0131 04:45:56.246932 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-lf6qp" event={"ID":"7d392663-51d0-4a1d-88ef-ade68860d06a","Type":"ContainerDied","Data":"7dc9fb49e7f7a59d09daff8c7aac3f366725a7469ebcfaf0f845145852f5671d"} Jan 31 04:45:56 crc kubenswrapper[4931]: I0131 04:45:56.247004 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dc9fb49e7f7a59d09daff8c7aac3f366725a7469ebcfaf0f845145852f5671d" Jan 31 04:45:56 crc kubenswrapper[4931]: I0131 04:45:56.247023 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lf6qp" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.419588 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 04:45:57 crc kubenswrapper[4931]: E0131 04:45:57.420171 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d392663-51d0-4a1d-88ef-ade68860d06a" containerName="glance-db-sync" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.420183 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d392663-51d0-4a1d-88ef-ade68860d06a" containerName="glance-db-sync" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.420308 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d392663-51d0-4a1d-88ef-ade68860d06a" containerName="glance-db-sync" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.421310 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.423059 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-4b9hw" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.423246 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.423907 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.435840 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524527 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524581 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524601 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524620 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjmh\" (UniqueName: \"kubernetes.io/projected/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-kube-api-access-gxjmh\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524646 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-dev\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524671 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524687 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524777 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524803 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524833 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-run\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524908 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-logs\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524935 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-sys\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.524992 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.525013 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625431 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625502 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-run\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625552 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-logs\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625576 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-sys\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625607 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625634 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625670 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625692 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-run\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625744 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625696 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-sys\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625709 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: 
\"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625809 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625816 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625833 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjmh\" (UniqueName: \"kubernetes.io/projected/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-kube-api-access-gxjmh\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625855 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-dev\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625861 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625881 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625909 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625949 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.625978 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.626011 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.626013 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-logs\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.626048 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.626072 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.626050 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-dev\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.640042 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.643951 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.644078 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjmh\" (UniqueName: \"kubernetes.io/projected/6e5cb183-3a6f-4601-848c-f7af7ff21a9e-kube-api-access-gxjmh\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.649331 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 
04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.651619 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6e5cb183-3a6f-4601-848c-f7af7ff21a9e\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.670616 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.672202 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.674310 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.682114 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.744673 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828323 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-dev\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828400 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828428 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828457 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z66ht\" (UniqueName: \"kubernetes.io/projected/6ca34ce7-bc38-4ea4-9131-132809cb355b-kube-api-access-z66ht\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828487 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828508 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828530 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828552 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828567 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828602 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828620 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-run\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828642 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828669 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.828693 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-sys\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930282 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930529 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-sys\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930566 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-dev\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930596 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930618 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930642 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z66ht\" (UniqueName: \"kubernetes.io/projected/6ca34ce7-bc38-4ea4-9131-132809cb355b-kube-api-access-z66ht\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930661 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930680 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930698 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930733 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930751 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930784 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930797 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-run\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930819 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930887 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930923 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-dev\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.930943 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931041 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931315 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-var-locks-brick\") pod 
\"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931330 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931377 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931559 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931827 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-sys\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931836 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-run\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.931940 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.946539 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.947321 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.966018 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.971522 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:57 crc kubenswrapper[4931]: I0131 04:45:57.978971 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z66ht\" (UniqueName: \"kubernetes.io/projected/6ca34ce7-bc38-4ea4-9131-132809cb355b-kube-api-access-z66ht\") pod \"glance-default-internal-api-0\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:58 crc kubenswrapper[4931]: I0131 04:45:58.016586 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:58 crc kubenswrapper[4931]: I0131 04:45:58.231962 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 04:45:58 crc kubenswrapper[4931]: I0131 04:45:58.253953 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:45:58 crc kubenswrapper[4931]: W0131 04:45:58.257030 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ca34ce7_bc38_4ea4_9131_132809cb355b.slice/crio-16a7ba874e41b3efbc34a3681717f848a7e7ab5e2e57f5454fb0be185cfb5230 WatchSource:0}: Error finding container 16a7ba874e41b3efbc34a3681717f848a7e7ab5e2e57f5454fb0be185cfb5230: Status 404 returned error can't find the container with id 16a7ba874e41b3efbc34a3681717f848a7e7ab5e2e57f5454fb0be185cfb5230 Jan 31 04:45:58 crc kubenswrapper[4931]: I0131 04:45:58.269917 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:45:58 crc kubenswrapper[4931]: I0131 04:45:58.270981 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"6e5cb183-3a6f-4601-848c-f7af7ff21a9e","Type":"ContainerStarted","Data":"95112c5cfa76be1b963edf67107a14ab68248a88b2b50c4085475ff3c6b0c602"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.279906 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerStarted","Data":"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.280546 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerStarted","Data":"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.280564 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerStarted","Data":"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.280577 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerStarted","Data":"16a7ba874e41b3efbc34a3681717f848a7e7ab5e2e57f5454fb0be185cfb5230"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.280707 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-log" containerID="cri-o://69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98" gracePeriod=30 Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.281208 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-api" containerID="cri-o://23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f" gracePeriod=30 Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.281250 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-httpd" containerID="cri-o://03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8" gracePeriod=30 Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.284377 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"6e5cb183-3a6f-4601-848c-f7af7ff21a9e","Type":"ContainerStarted","Data":"9538d45af96adff2585ac75a0541de9e12137476e709d461bfee9daf9f210278"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.284412 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"6e5cb183-3a6f-4601-848c-f7af7ff21a9e","Type":"ContainerStarted","Data":"ff3712455a476fef9199ad08b31e1b46d2a274063f0da7d8c69d94e1df96cedc"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.284424 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"6e5cb183-3a6f-4601-848c-f7af7ff21a9e","Type":"ContainerStarted","Data":"e9d9a5e9ba343e07b59159d6ec08309084fd5bc41edbc4820a1d48b6544147a4"} Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.342845 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.342825802 podStartE2EDuration="2.342825802s" podCreationTimestamp="2026-01-31 04:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:59.34200283 +0000 UTC m=+1318.151231724" watchObservedRunningTime="2026-01-31 04:45:59.342825802 +0000 UTC m=+1318.152054696" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.347637 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=3.347616301 podStartE2EDuration="3.347616301s" podCreationTimestamp="2026-01-31 04:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:59.315037014 +0000 UTC m=+1318.124265888" watchObservedRunningTime="2026-01-31 04:45:59.347616301 +0000 UTC m=+1318.156845175" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.711948 4931 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861087 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-var-locks-brick\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861147 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-sys\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861174 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-dev\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861189 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861243 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-config-data\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861248 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-sys" (OuterVolumeSpecName: "sys") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861272 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-dev" (OuterVolumeSpecName: "dev") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861278 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861302 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-nvme\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861332 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-logs\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861379 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-lib-modules\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861445 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z66ht\" (UniqueName: \"kubernetes.io/projected/6ca34ce7-bc38-4ea4-9131-132809cb355b-kube-api-access-z66ht\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861501 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-httpd-run\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861530 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861557 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-iscsi\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861584 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-run\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.861613 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-scripts\") pod \"6ca34ce7-bc38-4ea4-9131-132809cb355b\" (UID: \"6ca34ce7-bc38-4ea4-9131-132809cb355b\") " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862017 4931 reconciler_common.go:293] "Volume 
detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-dev\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862066 4931 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862083 4931 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-sys\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862064 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862089 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862143 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862159 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-run" (OuterVolumeSpecName: "run") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862801 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.862981 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-logs" (OuterVolumeSpecName: "logs") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.869099 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca34ce7-bc38-4ea4-9131-132809cb355b-kube-api-access-z66ht" (OuterVolumeSpecName: "kube-api-access-z66ht") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). 
InnerVolumeSpecName "kube-api-access-z66ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.872900 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.879358 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.881645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-scripts" (OuterVolumeSpecName: "scripts") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963321 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963375 4931 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963390 4931 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963403 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963423 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963440 4931 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963452 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963463 4931 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ca34ce7-bc38-4ea4-9131-132809cb355b-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963475 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z66ht\" (UniqueName: 
\"kubernetes.io/projected/6ca34ce7-bc38-4ea4-9131-132809cb355b-kube-api-access-z66ht\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.963488 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ca34ce7-bc38-4ea4-9131-132809cb355b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.977232 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-config-data" (OuterVolumeSpecName: "config-data") pod "6ca34ce7-bc38-4ea4-9131-132809cb355b" (UID: "6ca34ce7-bc38-4ea4-9131-132809cb355b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.981137 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 04:45:59 crc kubenswrapper[4931]: I0131 04:45:59.981395 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.065520 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca34ce7-bc38-4ea4-9131-132809cb355b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.065562 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.065573 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.295549 4931 generic.go:334] "Generic (PLEG): container finished" podID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerID="23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f" exitCode=143 Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.295581 4931 generic.go:334] "Generic (PLEG): container finished" podID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerID="03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8" exitCode=143 Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.295594 4931 generic.go:334] "Generic (PLEG): container finished" podID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerID="69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98" exitCode=143 Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.296418 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.298874 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerDied","Data":"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f"} Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.298911 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerDied","Data":"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8"} Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.298922 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerDied","Data":"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98"} Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.298931 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"6ca34ce7-bc38-4ea4-9131-132809cb355b","Type":"ContainerDied","Data":"16a7ba874e41b3efbc34a3681717f848a7e7ab5e2e57f5454fb0be185cfb5230"} Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.298945 4931 scope.go:117] "RemoveContainer" containerID="23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.319339 4931 scope.go:117] "RemoveContainer" containerID="03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.335068 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.340362 4931 scope.go:117] "RemoveContainer" containerID="69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.341494 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.362881 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:46:00 crc kubenswrapper[4931]: E0131 04:46:00.363994 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-log" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.364034 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-log" Jan 31 04:46:00 crc kubenswrapper[4931]: E0131 04:46:00.364057 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-api" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.364067 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-api" Jan 31 04:46:00 crc kubenswrapper[4931]: E0131 04:46:00.364080 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-httpd" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.364110 4931 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-httpd" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.364359 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-api" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.364383 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-httpd" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.364399 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" containerName="glance-log" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.368822 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.377692 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.381778 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.403275 4931 scope.go:117] "RemoveContainer" containerID="23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f" Jan 31 04:46:00 crc kubenswrapper[4931]: E0131 04:46:00.405872 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f\": container with ID starting with 23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f not found: ID does not exist" containerID="23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.405935 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f"} err="failed to get container status \"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f\": rpc error: code = NotFound desc = could not find container \"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f\": container with ID starting with 23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.405967 4931 scope.go:117] "RemoveContainer" containerID="03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8" Jan 31 04:46:00 crc kubenswrapper[4931]: E0131 04:46:00.406418 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8\": container with ID starting with 03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8 not found: ID does not exist" containerID="03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.406465 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8"} err="failed to get container status \"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8\": rpc error: code = NotFound desc = could not find container 
\"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8\": container with ID starting with 03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8 not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.406483 4931 scope.go:117] "RemoveContainer" containerID="69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98" Jan 31 04:46:00 crc kubenswrapper[4931]: E0131 04:46:00.407128 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98\": container with ID starting with 69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98 not found: ID does not exist" containerID="69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.407154 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98"} err="failed to get container status \"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98\": rpc error: code = NotFound desc = could not find container \"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98\": container with ID starting with 69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98 not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.407173 4931 scope.go:117] "RemoveContainer" containerID="23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.407545 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f"} err="failed to get container status \"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f\": rpc error: code = NotFound desc = could not find container \"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f\": container with ID starting with 23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.407565 4931 scope.go:117] "RemoveContainer" containerID="03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.407780 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8"} err="failed to get container status \"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8\": rpc error: code = NotFound desc = could not find container \"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8\": container with ID starting with 03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8 not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.407798 4931 scope.go:117] "RemoveContainer" containerID="69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.407997 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98"} err="failed to get container status \"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98\": rpc error: code = NotFound desc = could not find container 
\"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98\": container with ID starting with 69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98 not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.409982 4931 scope.go:117] "RemoveContainer" containerID="23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.410666 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f"} err="failed to get container status \"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f\": rpc error: code = NotFound desc = could not find container \"23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f\": container with ID starting with 23e2f54d2774147822c942383e66e1e1240ebda8eb83798d73e37acb852d3e7f not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.410691 4931 scope.go:117] "RemoveContainer" containerID="03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.411002 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8"} err="failed to get container status \"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8\": rpc error: code = NotFound desc = could not find container \"03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8\": container with ID starting with 03e8763dfe7515a01e34f6ebe05c7c7dfba2fe52f2092477e6f5661575879dc8 not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.411041 4931 scope.go:117] "RemoveContainer" containerID="69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.411309 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98"} err="failed to get container status \"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98\": rpc error: code = NotFound desc = could not find container \"69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98\": container with ID starting with 69d71139f2125e2d85cdc364af69df787e1d3546ca3bdaf0d4e6174b8d881b98 not found: ID does not exist" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474409 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474468 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af30e89b-382b-4005-8a4e-e48573c7913a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474496 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474517 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474545 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-run\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474585 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474615 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af30e89b-382b-4005-8a4e-e48573c7913a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474654 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474683 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af30e89b-382b-4005-8a4e-e48573c7913a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474705 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af30e89b-382b-4005-8a4e-e48573c7913a-logs\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474785 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-sys\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474820 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-dev\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474874 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.474912 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grshj\" (UniqueName: \"kubernetes.io/projected/af30e89b-382b-4005-8a4e-e48573c7913a-kube-api-access-grshj\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.576951 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af30e89b-382b-4005-8a4e-e48573c7913a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.576995 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577016 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577034 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-run\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577069 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577086 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577130 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-run\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577170 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577095 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af30e89b-382b-4005-8a4e-e48573c7913a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577352 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577378 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577418 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577448 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af30e89b-382b-4005-8a4e-e48573c7913a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577466 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af30e89b-382b-4005-8a4e-e48573c7913a-logs\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577485 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-sys\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577511 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dev\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-dev\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577540 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577570 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grshj\" (UniqueName: \"kubernetes.io/projected/af30e89b-382b-4005-8a4e-e48573c7913a-kube-api-access-grshj\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577614 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577739 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577779 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-dev\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577817 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-sys\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577835 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af30e89b-382b-4005-8a4e-e48573c7913a-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.577958 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af30e89b-382b-4005-8a4e-e48573c7913a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.579202 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/af30e89b-382b-4005-8a4e-e48573c7913a-logs\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.580790 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af30e89b-382b-4005-8a4e-e48573c7913a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.604825 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grshj\" (UniqueName: \"kubernetes.io/projected/af30e89b-382b-4005-8a4e-e48573c7913a-kube-api-access-grshj\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.605080 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af30e89b-382b-4005-8a4e-e48573c7913a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.608913 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.618664 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"af30e89b-382b-4005-8a4e-e48573c7913a\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:00 crc kubenswrapper[4931]: I0131 04:46:00.692434 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:01 crc kubenswrapper[4931]: I0131 04:46:01.166392 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 04:46:01 crc kubenswrapper[4931]: W0131 04:46:01.179570 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf30e89b_382b_4005_8a4e_e48573c7913a.slice/crio-a9c506295a70213ef3c550f1884f733aa4490a65d7cfea88eb8cbb9b04b5004b WatchSource:0}: Error finding container a9c506295a70213ef3c550f1884f733aa4490a65d7cfea88eb8cbb9b04b5004b: Status 404 returned error can't find the container with id a9c506295a70213ef3c550f1884f733aa4490a65d7cfea88eb8cbb9b04b5004b Jan 31 04:46:01 crc kubenswrapper[4931]: I0131 04:46:01.303194 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"af30e89b-382b-4005-8a4e-e48573c7913a","Type":"ContainerStarted","Data":"a9c506295a70213ef3c550f1884f733aa4490a65d7cfea88eb8cbb9b04b5004b"} Jan 31 04:46:01 crc kubenswrapper[4931]: I0131 04:46:01.908696 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca34ce7-bc38-4ea4-9131-132809cb355b" path="/var/lib/kubelet/pods/6ca34ce7-bc38-4ea4-9131-132809cb355b/volumes" Jan 31 04:46:02 crc kubenswrapper[4931]: I0131 04:46:02.315605 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"af30e89b-382b-4005-8a4e-e48573c7913a","Type":"ContainerStarted","Data":"4c39a0f4b139662ff78057b3c95bc82a91bae625723c4cb3f02ca22cd6416295"} Jan 31 04:46:02 crc kubenswrapper[4931]: I0131 04:46:02.315664 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"af30e89b-382b-4005-8a4e-e48573c7913a","Type":"ContainerStarted","Data":"167bbab10cd7a6c21b56fc76d0b416a22da4be597c0d29d5e25ab6d67d7030a5"} Jan 31 04:46:02 crc kubenswrapper[4931]: I0131 04:46:02.315687 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"af30e89b-382b-4005-8a4e-e48573c7913a","Type":"ContainerStarted","Data":"7824d1f718d2824fe2703b99a8b3cc37680f8e85a8d085c0e5268cbbaa606a96"} Jan 31 04:46:02 crc kubenswrapper[4931]: I0131 04:46:02.348359 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.34834118 podStartE2EDuration="2.34834118s" podCreationTimestamp="2026-01-31 04:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:46:02.342303657 +0000 UTC m=+1321.151532551" watchObservedRunningTime="2026-01-31 04:46:02.34834118 +0000 UTC m=+1321.157570054" Jan 31 04:46:07 crc kubenswrapper[4931]: I0131 04:46:07.745424 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:07 crc kubenswrapper[4931]: I0131 04:46:07.747403 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:07 crc kubenswrapper[4931]: I0131 04:46:07.747684 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:07 crc kubenswrapper[4931]: I0131 
04:46:07.774401 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:07 crc kubenswrapper[4931]: I0131 04:46:07.790055 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:07 crc kubenswrapper[4931]: I0131 04:46:07.790193 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:08 crc kubenswrapper[4931]: I0131 04:46:08.363517 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:08 crc kubenswrapper[4931]: I0131 04:46:08.363737 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:08 crc kubenswrapper[4931]: I0131 04:46:08.363802 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:08 crc kubenswrapper[4931]: I0131 04:46:08.378521 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:08 crc kubenswrapper[4931]: I0131 04:46:08.380198 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:08 crc kubenswrapper[4931]: I0131 04:46:08.394195 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 04:46:10 crc kubenswrapper[4931]: I0131 04:46:10.692683 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:10 crc kubenswrapper[4931]: I0131 04:46:10.694647 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:10 crc kubenswrapper[4931]: I0131 04:46:10.694662 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:10 crc kubenswrapper[4931]: I0131 04:46:10.715811 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:10 crc kubenswrapper[4931]: I0131 04:46:10.724813 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:10 crc kubenswrapper[4931]: I0131 04:46:10.738590 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:11 crc kubenswrapper[4931]: I0131 04:46:11.388050 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:11 crc kubenswrapper[4931]: I0131 04:46:11.388091 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:11 crc kubenswrapper[4931]: I0131 04:46:11.388103 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:11 crc kubenswrapper[4931]: I0131 04:46:11.401629 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:11 crc kubenswrapper[4931]: I0131 04:46:11.404071 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:11 crc kubenswrapper[4931]: I0131 04:46:11.411997 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 04:46:21 crc kubenswrapper[4931]: I0131 04:46:21.133520 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:46:21 crc kubenswrapper[4931]: I0131 04:46:21.134302 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.133188 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.133659 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.133769 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.135011 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9190250862ec2bef6098d6e5f251719973f7af0324c479608c8122c95778b27e"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.135114 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://9190250862ec2bef6098d6e5f251719973f7af0324c479608c8122c95778b27e" gracePeriod=600 Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.730617 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="9190250862ec2bef6098d6e5f251719973f7af0324c479608c8122c95778b27e" exitCode=0 Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.730776 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" 
event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"9190250862ec2bef6098d6e5f251719973f7af0324c479608c8122c95778b27e"} Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.731367 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4"} Jan 31 04:46:51 crc kubenswrapper[4931]: I0131 04:46:51.731452 4931 scope.go:117] "RemoveContainer" containerID="f6fdd63c2992141cc392f9894235afa4f2697a4d0af5fdfafac1d9c21aba8ff3" Jan 31 04:48:51 crc kubenswrapper[4931]: I0131 04:48:51.133138 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:48:51 crc kubenswrapper[4931]: I0131 04:48:51.133950 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.139127 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jkgxm"] Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.141525 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.167972 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkgxm"] Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.229994 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-catalog-content\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.230083 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tb4\" (UniqueName: \"kubernetes.io/projected/a5f4f9e2-c9a0-4045-97e1-8875627beda5-kube-api-access-57tb4\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.230150 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-utilities\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.331887 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-catalog-content\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " 
pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.331942 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57tb4\" (UniqueName: \"kubernetes.io/projected/a5f4f9e2-c9a0-4045-97e1-8875627beda5-kube-api-access-57tb4\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.331987 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-utilities\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.332472 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-catalog-content\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.332534 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-utilities\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.356048 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57tb4\" (UniqueName: \"kubernetes.io/projected/a5f4f9e2-c9a0-4045-97e1-8875627beda5-kube-api-access-57tb4\") pod \"redhat-operators-jkgxm\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.468274 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.723753 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wlngz"] Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.727244 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.739864 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlngz"] Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.839384 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-utilities\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.839623 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvr98\" (UniqueName: \"kubernetes.io/projected/1247cbcf-00e1-44ef-a00d-116fd5592d59-kube-api-access-nvr98\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.839682 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-catalog-content\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.940830 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-utilities\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.940943 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvr98\" (UniqueName: \"kubernetes.io/projected/1247cbcf-00e1-44ef-a00d-116fd5592d59-kube-api-access-nvr98\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.940969 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-catalog-content\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.941594 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-catalog-content\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.941618 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-utilities\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.963238 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nvr98\" (UniqueName: \"kubernetes.io/projected/1247cbcf-00e1-44ef-a00d-116fd5592d59-kube-api-access-nvr98\") pod \"redhat-marketplace-wlngz\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:48:59 crc kubenswrapper[4931]: I0131 04:48:59.972695 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkgxm"] Jan 31 04:49:00 crc kubenswrapper[4931]: I0131 04:49:00.010645 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkgxm" event={"ID":"a5f4f9e2-c9a0-4045-97e1-8875627beda5","Type":"ContainerStarted","Data":"c14253166f2163c0aa6b7809fffcd0f1404a24159453dfeb74f70720659c3f70"} Jan 31 04:49:00 crc kubenswrapper[4931]: I0131 04:49:00.056453 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:49:00 crc kubenswrapper[4931]: I0131 04:49:00.298607 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlngz"] Jan 31 04:49:01 crc kubenswrapper[4931]: I0131 04:49:01.018095 4931 generic.go:334] "Generic (PLEG): container finished" podID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerID="7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902" exitCode=0 Jan 31 04:49:01 crc kubenswrapper[4931]: I0131 04:49:01.018150 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkgxm" event={"ID":"a5f4f9e2-c9a0-4045-97e1-8875627beda5","Type":"ContainerDied","Data":"7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902"} Jan 31 04:49:01 crc kubenswrapper[4931]: I0131 04:49:01.019766 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:49:01 crc kubenswrapper[4931]: I0131 04:49:01.022012 4931 generic.go:334] "Generic (PLEG): container finished" podID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerID="6f75f77af677dd335e6aef90a81d136e066f324fc2547f132a9163092854d17a" exitCode=0 Jan 31 04:49:01 crc kubenswrapper[4931]: I0131 04:49:01.022045 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlngz" event={"ID":"1247cbcf-00e1-44ef-a00d-116fd5592d59","Type":"ContainerDied","Data":"6f75f77af677dd335e6aef90a81d136e066f324fc2547f132a9163092854d17a"} Jan 31 04:49:01 crc kubenswrapper[4931]: I0131 04:49:01.022069 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlngz" event={"ID":"1247cbcf-00e1-44ef-a00d-116fd5592d59","Type":"ContainerStarted","Data":"a7e1139a8e94857ddffd0f2cd80d525fe9f30a6402337c801f89e99bddbaba2e"} Jan 31 04:49:02 crc kubenswrapper[4931]: I0131 04:49:02.033234 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkgxm" event={"ID":"a5f4f9e2-c9a0-4045-97e1-8875627beda5","Type":"ContainerStarted","Data":"81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9"} Jan 31 04:49:02 crc kubenswrapper[4931]: I0131 04:49:02.036399 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlngz" event={"ID":"1247cbcf-00e1-44ef-a00d-116fd5592d59","Type":"ContainerStarted","Data":"13f6bdd790cc39c636c4ef0684222db29f3466fc22442c43d1423a0163d07419"} Jan 31 04:49:03 crc kubenswrapper[4931]: I0131 04:49:03.044085 4931 generic.go:334] "Generic (PLEG): container finished" 
podID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerID="81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9" exitCode=0 Jan 31 04:49:03 crc kubenswrapper[4931]: I0131 04:49:03.044508 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkgxm" event={"ID":"a5f4f9e2-c9a0-4045-97e1-8875627beda5","Type":"ContainerDied","Data":"81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9"} Jan 31 04:49:03 crc kubenswrapper[4931]: I0131 04:49:03.053468 4931 generic.go:334] "Generic (PLEG): container finished" podID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerID="13f6bdd790cc39c636c4ef0684222db29f3466fc22442c43d1423a0163d07419" exitCode=0 Jan 31 04:49:03 crc kubenswrapper[4931]: I0131 04:49:03.053512 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlngz" event={"ID":"1247cbcf-00e1-44ef-a00d-116fd5592d59","Type":"ContainerDied","Data":"13f6bdd790cc39c636c4ef0684222db29f3466fc22442c43d1423a0163d07419"} Jan 31 04:49:04 crc kubenswrapper[4931]: I0131 04:49:04.079318 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlngz" event={"ID":"1247cbcf-00e1-44ef-a00d-116fd5592d59","Type":"ContainerStarted","Data":"d73d4f0cbce9cb01376801828cdcb68ef87336ab6adb532240f18834bae81547"} Jan 31 04:49:04 crc kubenswrapper[4931]: I0131 04:49:04.109139 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wlngz" podStartSLOduration=2.452316149 podStartE2EDuration="5.10912115s" podCreationTimestamp="2026-01-31 04:48:59 +0000 UTC" firstStartedPulling="2026-01-31 04:49:01.023115675 +0000 UTC m=+1499.832344549" lastFinishedPulling="2026-01-31 04:49:03.679920676 +0000 UTC m=+1502.489149550" observedRunningTime="2026-01-31 04:49:04.099645827 +0000 UTC m=+1502.908874711" watchObservedRunningTime="2026-01-31 04:49:04.10912115 +0000 UTC m=+1502.918350024" Jan 31 04:49:05 crc kubenswrapper[4931]: I0131 04:49:05.092888 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkgxm" event={"ID":"a5f4f9e2-c9a0-4045-97e1-8875627beda5","Type":"ContainerStarted","Data":"a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf"} Jan 31 04:49:05 crc kubenswrapper[4931]: I0131 04:49:05.113580 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jkgxm" podStartSLOduration=3.23051874 podStartE2EDuration="6.113556516s" podCreationTimestamp="2026-01-31 04:48:59 +0000 UTC" firstStartedPulling="2026-01-31 04:49:01.019570646 +0000 UTC m=+1499.828799520" lastFinishedPulling="2026-01-31 04:49:03.902608422 +0000 UTC m=+1502.711837296" observedRunningTime="2026-01-31 04:49:05.110866241 +0000 UTC m=+1503.920095135" watchObservedRunningTime="2026-01-31 04:49:05.113556516 +0000 UTC m=+1503.922785390" Jan 31 04:49:09 crc kubenswrapper[4931]: I0131 04:49:09.468803 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:49:09 crc kubenswrapper[4931]: I0131 04:49:09.469269 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:49:10 crc kubenswrapper[4931]: I0131 04:49:10.057744 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:49:10 crc kubenswrapper[4931]: I0131 
04:49:10.059069 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:49:10 crc kubenswrapper[4931]: I0131 04:49:10.114691 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:49:10 crc kubenswrapper[4931]: I0131 04:49:10.173904 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:49:10 crc kubenswrapper[4931]: I0131 04:49:10.349487 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlngz"] Jan 31 04:49:10 crc kubenswrapper[4931]: I0131 04:49:10.514171 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jkgxm" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="registry-server" probeResult="failure" output=< Jan 31 04:49:10 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 31 04:49:10 crc kubenswrapper[4931]: > Jan 31 04:49:12 crc kubenswrapper[4931]: I0131 04:49:12.150802 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wlngz" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="registry-server" containerID="cri-o://d73d4f0cbce9cb01376801828cdcb68ef87336ab6adb532240f18834bae81547" gracePeriod=2 Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.166049 4931 generic.go:334] "Generic (PLEG): container finished" podID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerID="d73d4f0cbce9cb01376801828cdcb68ef87336ab6adb532240f18834bae81547" exitCode=0 Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.166129 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlngz" event={"ID":"1247cbcf-00e1-44ef-a00d-116fd5592d59","Type":"ContainerDied","Data":"d73d4f0cbce9cb01376801828cdcb68ef87336ab6adb532240f18834bae81547"} Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.777405 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.946825 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvr98\" (UniqueName: \"kubernetes.io/projected/1247cbcf-00e1-44ef-a00d-116fd5592d59-kube-api-access-nvr98\") pod \"1247cbcf-00e1-44ef-a00d-116fd5592d59\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.946926 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-utilities\") pod \"1247cbcf-00e1-44ef-a00d-116fd5592d59\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.947062 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-catalog-content\") pod \"1247cbcf-00e1-44ef-a00d-116fd5592d59\" (UID: \"1247cbcf-00e1-44ef-a00d-116fd5592d59\") " Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.948984 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-utilities" (OuterVolumeSpecName: "utilities") pod "1247cbcf-00e1-44ef-a00d-116fd5592d59" (UID: "1247cbcf-00e1-44ef-a00d-116fd5592d59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.955921 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1247cbcf-00e1-44ef-a00d-116fd5592d59-kube-api-access-nvr98" (OuterVolumeSpecName: "kube-api-access-nvr98") pod "1247cbcf-00e1-44ef-a00d-116fd5592d59" (UID: "1247cbcf-00e1-44ef-a00d-116fd5592d59"). InnerVolumeSpecName "kube-api-access-nvr98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:13 crc kubenswrapper[4931]: I0131 04:49:13.981947 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1247cbcf-00e1-44ef-a00d-116fd5592d59" (UID: "1247cbcf-00e1-44ef-a00d-116fd5592d59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.048624 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.048678 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvr98\" (UniqueName: \"kubernetes.io/projected/1247cbcf-00e1-44ef-a00d-116fd5592d59-kube-api-access-nvr98\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.048694 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1247cbcf-00e1-44ef-a00d-116fd5592d59-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.175446 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlngz" event={"ID":"1247cbcf-00e1-44ef-a00d-116fd5592d59","Type":"ContainerDied","Data":"a7e1139a8e94857ddffd0f2cd80d525fe9f30a6402337c801f89e99bddbaba2e"} Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.175507 4931 scope.go:117] "RemoveContainer" containerID="d73d4f0cbce9cb01376801828cdcb68ef87336ab6adb532240f18834bae81547" Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.175514 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlngz" Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.198117 4931 scope.go:117] "RemoveContainer" containerID="13f6bdd790cc39c636c4ef0684222db29f3466fc22442c43d1423a0163d07419" Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.212011 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlngz"] Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.221766 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlngz"] Jan 31 04:49:14 crc kubenswrapper[4931]: I0131 04:49:14.229727 4931 scope.go:117] "RemoveContainer" containerID="6f75f77af677dd335e6aef90a81d136e066f324fc2547f132a9163092854d17a" Jan 31 04:49:15 crc kubenswrapper[4931]: I0131 04:49:15.905865 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" path="/var/lib/kubelet/pods/1247cbcf-00e1-44ef-a00d-116fd5592d59/volumes" Jan 31 04:49:19 crc kubenswrapper[4931]: I0131 04:49:19.508935 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:49:19 crc kubenswrapper[4931]: I0131 04:49:19.552612 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:49:19 crc kubenswrapper[4931]: I0131 04:49:19.741637 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jkgxm"] Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.136907 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.137340 4931 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.236232 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jkgxm" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="registry-server" containerID="cri-o://a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf" gracePeriod=2 Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.689262 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.761741 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57tb4\" (UniqueName: \"kubernetes.io/projected/a5f4f9e2-c9a0-4045-97e1-8875627beda5-kube-api-access-57tb4\") pod \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.761889 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-catalog-content\") pod \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.762045 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-utilities\") pod \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\" (UID: \"a5f4f9e2-c9a0-4045-97e1-8875627beda5\") " Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.764001 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-utilities" (OuterVolumeSpecName: "utilities") pod "a5f4f9e2-c9a0-4045-97e1-8875627beda5" (UID: "a5f4f9e2-c9a0-4045-97e1-8875627beda5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.767447 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f4f9e2-c9a0-4045-97e1-8875627beda5-kube-api-access-57tb4" (OuterVolumeSpecName: "kube-api-access-57tb4") pod "a5f4f9e2-c9a0-4045-97e1-8875627beda5" (UID: "a5f4f9e2-c9a0-4045-97e1-8875627beda5"). InnerVolumeSpecName "kube-api-access-57tb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.864349 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.864392 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57tb4\" (UniqueName: \"kubernetes.io/projected/a5f4f9e2-c9a0-4045-97e1-8875627beda5-kube-api-access-57tb4\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.879260 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5f4f9e2-c9a0-4045-97e1-8875627beda5" (UID: "a5f4f9e2-c9a0-4045-97e1-8875627beda5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:21 crc kubenswrapper[4931]: I0131 04:49:21.966368 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f4f9e2-c9a0-4045-97e1-8875627beda5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.246546 4931 generic.go:334] "Generic (PLEG): container finished" podID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerID="a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf" exitCode=0 Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.246594 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkgxm" event={"ID":"a5f4f9e2-c9a0-4045-97e1-8875627beda5","Type":"ContainerDied","Data":"a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf"} Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.246624 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkgxm" event={"ID":"a5f4f9e2-c9a0-4045-97e1-8875627beda5","Type":"ContainerDied","Data":"c14253166f2163c0aa6b7809fffcd0f1404a24159453dfeb74f70720659c3f70"} Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.246645 4931 scope.go:117] "RemoveContainer" containerID="a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.246811 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkgxm" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.270333 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jkgxm"] Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.274288 4931 scope.go:117] "RemoveContainer" containerID="81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.275478 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jkgxm"] Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.290803 4931 scope.go:117] "RemoveContainer" containerID="7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.317289 4931 scope.go:117] "RemoveContainer" containerID="a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf" Jan 31 04:49:22 crc kubenswrapper[4931]: E0131 04:49:22.317834 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf\": container with ID starting with a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf not found: ID does not exist" containerID="a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.317881 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf"} err="failed to get container status \"a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf\": rpc error: code = NotFound desc = could not find container \"a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf\": container with ID starting with a303bcf87396a31fbf234e47216dcd44da49e7219f64ad537dc408f964406ccf not found: ID does not exist" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.317910 4931 scope.go:117] "RemoveContainer" containerID="81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9" Jan 31 04:49:22 crc kubenswrapper[4931]: E0131 04:49:22.318265 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9\": container with ID starting with 81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9 not found: ID does not exist" containerID="81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.318305 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9"} err="failed to get container status \"81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9\": rpc error: code = NotFound desc = could not find container \"81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9\": container with ID starting with 81ee43349b6f6a02eac0da3f4832d332b11e0ffe2d7b73858292439813b85ab9 not found: ID does not exist" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.318334 4931 scope.go:117] "RemoveContainer" containerID="7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902" Jan 31 04:49:22 crc kubenswrapper[4931]: E0131 04:49:22.318573 4931 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902\": container with ID starting with 7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902 not found: ID does not exist" containerID="7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902" Jan 31 04:49:22 crc kubenswrapper[4931]: I0131 04:49:22.318605 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902"} err="failed to get container status \"7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902\": rpc error: code = NotFound desc = could not find container \"7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902\": container with ID starting with 7b4646c438bed8a6ad2454accc34a5fe274855a4924497dd2a6f4ebc01516902 not found: ID does not exist" Jan 31 04:49:23 crc kubenswrapper[4931]: I0131 04:49:23.905986 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" path="/var/lib/kubelet/pods/a5f4f9e2-c9a0-4045-97e1-8875627beda5/volumes" Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.133856 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.134429 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.134473 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.135161 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.135232 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" gracePeriod=600 Jan 31 04:49:51 crc kubenswrapper[4931]: E0131 04:49:51.274572 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.515064 4931 generic.go:334] "Generic 
(PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" exitCode=0 Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.515106 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4"} Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.515141 4931 scope.go:117] "RemoveContainer" containerID="9190250862ec2bef6098d6e5f251719973f7af0324c479608c8122c95778b27e" Jan 31 04:49:51 crc kubenswrapper[4931]: I0131 04:49:51.515609 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:49:51 crc kubenswrapper[4931]: E0131 04:49:51.515876 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:50:02 crc kubenswrapper[4931]: I0131 04:50:02.951125 4931 scope.go:117] "RemoveContainer" containerID="a92e1254a118dba3fe07ecdbb36184ae4544e69d1535bcd7c7f4ed6b4c4329a1" Jan 31 04:50:02 crc kubenswrapper[4931]: I0131 04:50:02.970133 4931 scope.go:117] "RemoveContainer" containerID="365ca5691b44da07dd10e84670168714df6f5f8e3d528f6e700fd346892466f5" Jan 31 04:50:03 crc kubenswrapper[4931]: I0131 04:50:03.010456 4931 scope.go:117] "RemoveContainer" containerID="efe2cf3b47432667022ab2a6937e20421c61b70597aaf688099c7ab50f2cbe70" Jan 31 04:50:04 crc kubenswrapper[4931]: I0131 04:50:04.898399 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:50:04 crc kubenswrapper[4931]: E0131 04:50:04.899176 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:50:19 crc kubenswrapper[4931]: I0131 04:50:19.897916 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:50:19 crc kubenswrapper[4931]: E0131 04:50:19.899099 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:50:32 crc kubenswrapper[4931]: I0131 04:50:32.896997 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:50:32 crc kubenswrapper[4931]: E0131 04:50:32.897815 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:50:45 crc kubenswrapper[4931]: I0131 04:50:45.897173 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:50:45 crc kubenswrapper[4931]: E0131 04:50:45.898263 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:51:00 crc kubenswrapper[4931]: I0131 04:51:00.897581 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:51:00 crc kubenswrapper[4931]: E0131 04:51:00.900228 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:51:03 crc kubenswrapper[4931]: I0131 04:51:03.141478 4931 scope.go:117] "RemoveContainer" containerID="3b68d38a57d469446d6b3d17f5e7cff25f00348a4bd9a22046aa05bc5d595838" Jan 31 04:51:03 crc kubenswrapper[4931]: I0131 04:51:03.183318 4931 scope.go:117] "RemoveContainer" containerID="c425c7fbddde1c3f4899c9276e2b00c0fb2f34d0719d0c79ced412cd89793d23" Jan 31 04:51:03 crc kubenswrapper[4931]: I0131 04:51:03.211051 4931 scope.go:117] "RemoveContainer" containerID="e4d9ee8593730318929f21d5d62d7f648f5045f95409da12f6ccbbe6cd4d200b" Jan 31 04:51:03 crc kubenswrapper[4931]: I0131 04:51:03.234195 4931 scope.go:117] "RemoveContainer" containerID="be01478f3b9f61c49a7dae1bf4f533f855667a7dc402a6747f221ccebb23301b" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.813809 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-htgn5"] Jan 31 04:51:06 crc kubenswrapper[4931]: E0131 04:51:06.814918 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="extract-content" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.814953 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="extract-content" Jan 31 04:51:06 crc kubenswrapper[4931]: E0131 04:51:06.815023 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="extract-utilities" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.815044 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="extract-utilities" Jan 31 04:51:06 crc kubenswrapper[4931]: E0131 04:51:06.815096 4931 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="registry-server" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.815116 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="registry-server" Jan 31 04:51:06 crc kubenswrapper[4931]: E0131 04:51:06.815184 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="extract-utilities" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.815203 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="extract-utilities" Jan 31 04:51:06 crc kubenswrapper[4931]: E0131 04:51:06.815243 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="extract-content" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.815262 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="extract-content" Jan 31 04:51:06 crc kubenswrapper[4931]: E0131 04:51:06.815304 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="registry-server" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.815323 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="registry-server" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.815764 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f4f9e2-c9a0-4045-97e1-8875627beda5" containerName="registry-server" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.815801 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1247cbcf-00e1-44ef-a00d-116fd5592d59" containerName="registry-server" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.818404 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.821761 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htgn5"] Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.966414 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-catalog-content\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.966670 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-utilities\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:06 crc kubenswrapper[4931]: I0131 04:51:06.966745 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x624s\" (UniqueName: \"kubernetes.io/projected/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-kube-api-access-x624s\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.068051 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-utilities\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.068112 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x624s\" (UniqueName: \"kubernetes.io/projected/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-kube-api-access-x624s\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.068206 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-catalog-content\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.068919 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-utilities\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.068936 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-catalog-content\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.090610 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x624s\" (UniqueName: \"kubernetes.io/projected/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-kube-api-access-x624s\") pod \"community-operators-htgn5\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.158517 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:07 crc kubenswrapper[4931]: I0131 04:51:07.661916 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htgn5"] Jan 31 04:51:08 crc kubenswrapper[4931]: I0131 04:51:08.200004 4931 generic.go:334] "Generic (PLEG): container finished" podID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerID="02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27" exitCode=0 Jan 31 04:51:08 crc kubenswrapper[4931]: I0131 04:51:08.200074 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htgn5" event={"ID":"1b7be3bf-9681-44cb-9c21-58bd7a534dd4","Type":"ContainerDied","Data":"02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27"} Jan 31 04:51:08 crc kubenswrapper[4931]: I0131 04:51:08.203827 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htgn5" event={"ID":"1b7be3bf-9681-44cb-9c21-58bd7a534dd4","Type":"ContainerStarted","Data":"2e98b5d5bd13d8f505dc5bcbbfe61e40cb8ea6e1e2708fe52e28d03cc9ffb85c"} Jan 31 04:51:09 crc kubenswrapper[4931]: I0131 04:51:09.211182 4931 generic.go:334] "Generic (PLEG): container finished" podID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerID="e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8" exitCode=0 Jan 31 04:51:09 crc kubenswrapper[4931]: I0131 04:51:09.211259 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htgn5" event={"ID":"1b7be3bf-9681-44cb-9c21-58bd7a534dd4","Type":"ContainerDied","Data":"e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8"} Jan 31 04:51:10 crc kubenswrapper[4931]: I0131 04:51:10.223996 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htgn5" event={"ID":"1b7be3bf-9681-44cb-9c21-58bd7a534dd4","Type":"ContainerStarted","Data":"9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276"} Jan 31 04:51:10 crc kubenswrapper[4931]: I0131 04:51:10.246706 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-htgn5" podStartSLOduration=2.7303848950000003 podStartE2EDuration="4.246676831s" podCreationTimestamp="2026-01-31 04:51:06 +0000 UTC" firstStartedPulling="2026-01-31 04:51:08.201697764 +0000 UTC m=+1627.010926638" lastFinishedPulling="2026-01-31 04:51:09.71798968 +0000 UTC m=+1628.527218574" observedRunningTime="2026-01-31 04:51:10.243796041 +0000 UTC m=+1629.053024945" watchObservedRunningTime="2026-01-31 04:51:10.246676831 +0000 UTC m=+1629.055905745" Jan 31 04:51:15 crc kubenswrapper[4931]: I0131 04:51:15.897443 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:51:15 crc kubenswrapper[4931]: E0131 04:51:15.899398 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:51:17 crc kubenswrapper[4931]: I0131 04:51:17.159440 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:17 crc kubenswrapper[4931]: I0131 04:51:17.159588 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:17 crc kubenswrapper[4931]: I0131 04:51:17.240207 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:17 crc kubenswrapper[4931]: I0131 04:51:17.328407 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:17 crc kubenswrapper[4931]: I0131 04:51:17.481872 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htgn5"] Jan 31 04:51:19 crc kubenswrapper[4931]: I0131 04:51:19.055068 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-create-kh978"] Jan 31 04:51:19 crc kubenswrapper[4931]: I0131 04:51:19.062121 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-create-kh978"] Jan 31 04:51:19 crc kubenswrapper[4931]: I0131 04:51:19.291608 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-htgn5" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="registry-server" containerID="cri-o://9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276" gracePeriod=2 Jan 31 04:51:19 crc kubenswrapper[4931]: I0131 04:51:19.905778 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b088d08-99db-4f24-af21-ac85849692c5" path="/var/lib/kubelet/pods/1b088d08-99db-4f24-af21-ac85849692c5/volumes" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.186572 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.270371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-catalog-content\") pod \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.270556 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-utilities\") pod \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.270581 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x624s\" (UniqueName: \"kubernetes.io/projected/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-kube-api-access-x624s\") pod \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\" (UID: \"1b7be3bf-9681-44cb-9c21-58bd7a534dd4\") " Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.273353 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-utilities" (OuterVolumeSpecName: "utilities") pod "1b7be3bf-9681-44cb-9c21-58bd7a534dd4" (UID: "1b7be3bf-9681-44cb-9c21-58bd7a534dd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.281966 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-kube-api-access-x624s" (OuterVolumeSpecName: "kube-api-access-x624s") pod "1b7be3bf-9681-44cb-9c21-58bd7a534dd4" (UID: "1b7be3bf-9681-44cb-9c21-58bd7a534dd4"). InnerVolumeSpecName "kube-api-access-x624s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.306484 4931 generic.go:334] "Generic (PLEG): container finished" podID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerID="9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276" exitCode=0 Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.306539 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htgn5" event={"ID":"1b7be3bf-9681-44cb-9c21-58bd7a534dd4","Type":"ContainerDied","Data":"9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276"} Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.306571 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htgn5" event={"ID":"1b7be3bf-9681-44cb-9c21-58bd7a534dd4","Type":"ContainerDied","Data":"2e98b5d5bd13d8f505dc5bcbbfe61e40cb8ea6e1e2708fe52e28d03cc9ffb85c"} Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.306594 4931 scope.go:117] "RemoveContainer" containerID="9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.306773 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htgn5" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.325768 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b7be3bf-9681-44cb-9c21-58bd7a534dd4" (UID: "1b7be3bf-9681-44cb-9c21-58bd7a534dd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.335351 4931 scope.go:117] "RemoveContainer" containerID="e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.358546 4931 scope.go:117] "RemoveContainer" containerID="02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.386901 4931 scope.go:117] "RemoveContainer" containerID="9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.388109 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.388141 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x624s\" (UniqueName: \"kubernetes.io/projected/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-kube-api-access-x624s\") on node \"crc\" DevicePath \"\"" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.388157 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7be3bf-9681-44cb-9c21-58bd7a534dd4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:51:20 crc kubenswrapper[4931]: E0131 04:51:20.393869 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276\": container with ID starting with 9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276 not found: ID does not exist" containerID="9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.393925 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276"} err="failed to get container status \"9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276\": rpc error: code = NotFound desc = could not find container \"9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276\": container with ID starting with 9a58dfe34b40ff73afd03032100fea7aecaa50da52f734437cf22eac42c92276 not found: ID does not exist" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.393956 4931 scope.go:117] "RemoveContainer" containerID="e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8" Jan 31 04:51:20 crc kubenswrapper[4931]: E0131 04:51:20.397857 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8\": container with ID starting with e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8 not found: ID does not exist" 
containerID="e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.397898 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8"} err="failed to get container status \"e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8\": rpc error: code = NotFound desc = could not find container \"e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8\": container with ID starting with e6948594fefbe2aaeeaeb034e826d17ecafb0c33322af13aa469e4c6474514c8 not found: ID does not exist" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.397936 4931 scope.go:117] "RemoveContainer" containerID="02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27" Jan 31 04:51:20 crc kubenswrapper[4931]: E0131 04:51:20.398551 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27\": container with ID starting with 02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27 not found: ID does not exist" containerID="02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.398579 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27"} err="failed to get container status \"02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27\": rpc error: code = NotFound desc = could not find container \"02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27\": container with ID starting with 02abd7d3ef8c4da8937804944a42b2e6c1e519fce26a596e4c80e5005de40a27 not found: ID does not exist" Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.644434 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htgn5"] Jan 31 04:51:20 crc kubenswrapper[4931]: I0131 04:51:20.649963 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-htgn5"] Jan 31 04:51:21 crc kubenswrapper[4931]: I0131 04:51:21.905497 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" path="/var/lib/kubelet/pods/1b7be3bf-9681-44cb-9c21-58bd7a534dd4/volumes" Jan 31 04:51:29 crc kubenswrapper[4931]: I0131 04:51:29.030712 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-0bae-account-create-xmhsn"] Jan 31 04:51:29 crc kubenswrapper[4931]: I0131 04:51:29.042641 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-0bae-account-create-xmhsn"] Jan 31 04:51:29 crc kubenswrapper[4931]: I0131 04:51:29.910384 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc26d7e-6d49-45a0-bad1-540d998798fb" path="/var/lib/kubelet/pods/bcc26d7e-6d49-45a0-bad1-540d998798fb/volumes" Jan 31 04:51:30 crc kubenswrapper[4931]: I0131 04:51:30.898023 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:51:30 crc kubenswrapper[4931]: E0131 04:51:30.900151 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:51:44 crc kubenswrapper[4931]: I0131 04:51:44.896937 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:51:44 crc kubenswrapper[4931]: E0131 04:51:44.897679 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:51:45 crc kubenswrapper[4931]: I0131 04:51:45.051590 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-hlp86"] Jan 31 04:51:45 crc kubenswrapper[4931]: I0131 04:51:45.058028 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-hlp86"] Jan 31 04:51:45 crc kubenswrapper[4931]: I0131 04:51:45.905267 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5122d57d-31ab-4312-a0d7-32c11806847f" path="/var/lib/kubelet/pods/5122d57d-31ab-4312-a0d7-32c11806847f/volumes" Jan 31 04:51:52 crc kubenswrapper[4931]: I0131 04:51:52.030801 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-59mt8"] Jan 31 04:51:52 crc kubenswrapper[4931]: I0131 04:51:52.040830 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-59mt8"] Jan 31 04:51:53 crc kubenswrapper[4931]: I0131 04:51:53.905254 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a49df57-04e5-4967-bba4-f797623943f3" path="/var/lib/kubelet/pods/4a49df57-04e5-4967-bba4-f797623943f3/volumes" Jan 31 04:51:58 crc kubenswrapper[4931]: I0131 04:51:58.896633 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:51:58 crc kubenswrapper[4931]: E0131 04:51:58.897189 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:52:03 crc kubenswrapper[4931]: I0131 04:52:03.308289 4931 scope.go:117] "RemoveContainer" containerID="b88df7809d6800761b70ff0f89f02ab9a612490737b24f719f851567b819c225" Jan 31 04:52:03 crc kubenswrapper[4931]: I0131 04:52:03.350046 4931 scope.go:117] "RemoveContainer" containerID="7ff587e7d4b5b26acacc69b2e25286a7f18280a51aab704de20d260734969ee4" Jan 31 04:52:03 crc kubenswrapper[4931]: I0131 04:52:03.369903 4931 scope.go:117] "RemoveContainer" containerID="a5a5ef56818419e412ef08b28e6a2d5c4791167bf0d9f38620af8119ac8bf8f3" Jan 31 04:52:03 crc kubenswrapper[4931]: I0131 04:52:03.411294 4931 scope.go:117] "RemoveContainer" containerID="bb2f39882fe27348f8c28f27de103569e8e7de56edcc737c2f912dffbfabfe43" Jan 31 04:52:03 crc kubenswrapper[4931]: I0131 04:52:03.459575 4931 
scope.go:117] "RemoveContainer" containerID="6da3826094312cc2aa4f8e736c661bbf33071c21dd327e35bb962273bca0ea44" Jan 31 04:52:11 crc kubenswrapper[4931]: I0131 04:52:11.903489 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:52:11 crc kubenswrapper[4931]: E0131 04:52:11.904222 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.111336 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:52:18 crc kubenswrapper[4931]: E0131 04:52:18.112211 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="registry-server" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.112227 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="registry-server" Jan 31 04:52:18 crc kubenswrapper[4931]: E0131 04:52:18.112281 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="extract-utilities" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.112289 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="extract-utilities" Jan 31 04:52:18 crc kubenswrapper[4931]: E0131 04:52:18.112311 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="extract-content" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.112320 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="extract-content" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.112477 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7be3bf-9681-44cb-9c21-58bd7a534dd4" containerName="registry-server" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.113049 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.115372 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.115488 4931 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-qp85r" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.115648 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.120208 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.124851 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.207288 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.207372 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcc29\" (UniqueName: \"kubernetes.io/projected/f7e7eb33-452c-4662-ba80-73ff64ec73fd-kube-api-access-vcc29\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.207427 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-scripts\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.207449 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-config\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.308953 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-scripts\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.309010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-config\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.309074 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.309111 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcc29\" (UniqueName: \"kubernetes.io/projected/f7e7eb33-452c-4662-ba80-73ff64ec73fd-kube-api-access-vcc29\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.309967 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-config\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.310646 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-scripts\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.316137 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7e7eb33-452c-4662-ba80-73ff64ec73fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.328120 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcc29\" (UniqueName: \"kubernetes.io/projected/f7e7eb33-452c-4662-ba80-73ff64ec73fd-kube-api-access-vcc29\") pod \"openstackclient\" (UID: \"f7e7eb33-452c-4662-ba80-73ff64ec73fd\") " pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.441476 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 04:52:18 crc kubenswrapper[4931]: I0131 04:52:18.906159 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 04:52:19 crc kubenswrapper[4931]: I0131 04:52:19.773482 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"f7e7eb33-452c-4662-ba80-73ff64ec73fd","Type":"ContainerStarted","Data":"995b8cb06bd97c2feb10306f040c2e4daeea78eae89079a0240deda1f82e14ef"} Jan 31 04:52:19 crc kubenswrapper[4931]: I0131 04:52:19.773764 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"f7e7eb33-452c-4662-ba80-73ff64ec73fd","Type":"ContainerStarted","Data":"eead15723a00adf611d85df7a18e3814be2a7c5831e5fb21830cb3c6810fa3d5"} Jan 31 04:52:19 crc kubenswrapper[4931]: I0131 04:52:19.788634 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=1.788618861 podStartE2EDuration="1.788618861s" podCreationTimestamp="2026-01-31 04:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:52:19.785452263 +0000 UTC m=+1698.594681157" watchObservedRunningTime="2026-01-31 04:52:19.788618861 +0000 UTC m=+1698.597847735" Jan 31 04:52:24 crc kubenswrapper[4931]: I0131 04:52:24.896771 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:52:24 crc kubenswrapper[4931]: E0131 04:52:24.898047 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:52:38 crc kubenswrapper[4931]: I0131 04:52:38.896416 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:52:38 crc kubenswrapper[4931]: E0131 04:52:38.897240 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:52:49 crc kubenswrapper[4931]: I0131 04:52:49.897000 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:52:49 crc kubenswrapper[4931]: E0131 04:52:49.897968 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:53:04 crc kubenswrapper[4931]: I0131 04:53:04.897255 4931 scope.go:117] "RemoveContainer" 
containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:53:04 crc kubenswrapper[4931]: E0131 04:53:04.898245 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:53:18 crc kubenswrapper[4931]: I0131 04:53:18.896703 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:53:18 crc kubenswrapper[4931]: E0131 04:53:18.897516 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:53:30 crc kubenswrapper[4931]: I0131 04:53:30.896219 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:53:30 crc kubenswrapper[4931]: E0131 04:53:30.896749 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.171656 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n7gp8"] Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.173817 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.189701 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7gp8"] Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.363804 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-catalog-content\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.364102 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchfc\" (UniqueName: \"kubernetes.io/projected/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-kube-api-access-zchfc\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.364163 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-utilities\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.465018 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-utilities\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.465089 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-catalog-content\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.465132 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchfc\" (UniqueName: \"kubernetes.io/projected/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-kube-api-access-zchfc\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.465495 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-utilities\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.465957 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-catalog-content\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.489169 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zchfc\" (UniqueName: \"kubernetes.io/projected/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-kube-api-access-zchfc\") pod \"certified-operators-n7gp8\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.549439 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:36 crc kubenswrapper[4931]: I0131 04:53:36.816656 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7gp8"] Jan 31 04:53:36 crc kubenswrapper[4931]: W0131 04:53:36.822229 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833bc5d6_e376_4d6f_a7bf_96c3eff9217b.slice/crio-e497d831aebf5d9cf9752886c90418970f37cfd3ba3e76e66a0ee80792f7e0d0 WatchSource:0}: Error finding container e497d831aebf5d9cf9752886c90418970f37cfd3ba3e76e66a0ee80792f7e0d0: Status 404 returned error can't find the container with id e497d831aebf5d9cf9752886c90418970f37cfd3ba3e76e66a0ee80792f7e0d0 Jan 31 04:53:37 crc kubenswrapper[4931]: I0131 04:53:37.603711 4931 generic.go:334] "Generic (PLEG): container finished" podID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerID="d342ff0171a991a07d390c3199a1b64867450b6f4e774d3d565f12f764f6f351" exitCode=0 Jan 31 04:53:37 crc kubenswrapper[4931]: I0131 04:53:37.603892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7gp8" event={"ID":"833bc5d6-e376-4d6f-a7bf-96c3eff9217b","Type":"ContainerDied","Data":"d342ff0171a991a07d390c3199a1b64867450b6f4e774d3d565f12f764f6f351"} Jan 31 04:53:37 crc kubenswrapper[4931]: I0131 04:53:37.604498 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7gp8" event={"ID":"833bc5d6-e376-4d6f-a7bf-96c3eff9217b","Type":"ContainerStarted","Data":"e497d831aebf5d9cf9752886c90418970f37cfd3ba3e76e66a0ee80792f7e0d0"} Jan 31 04:53:39 crc kubenswrapper[4931]: I0131 04:53:39.623757 4931 generic.go:334] "Generic (PLEG): container finished" podID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerID="c21c986a918d68c069f123fdc7e399263dffd9944313fc98e1c143b6d3402231" exitCode=0 Jan 31 04:53:39 crc kubenswrapper[4931]: I0131 04:53:39.623875 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7gp8" event={"ID":"833bc5d6-e376-4d6f-a7bf-96c3eff9217b","Type":"ContainerDied","Data":"c21c986a918d68c069f123fdc7e399263dffd9944313fc98e1c143b6d3402231"} Jan 31 04:53:40 crc kubenswrapper[4931]: I0131 04:53:40.639556 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7gp8" event={"ID":"833bc5d6-e376-4d6f-a7bf-96c3eff9217b","Type":"ContainerStarted","Data":"468f575ed2c639d65a23d3a9adc2b5945fe2272981c8ceb889ec5bb5c64eb3e5"} Jan 31 04:53:40 crc kubenswrapper[4931]: I0131 04:53:40.659225 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n7gp8" podStartSLOduration=2.001946936 podStartE2EDuration="4.65920605s" podCreationTimestamp="2026-01-31 04:53:36 +0000 UTC" firstStartedPulling="2026-01-31 04:53:37.607243444 +0000 UTC m=+1776.416472328" lastFinishedPulling="2026-01-31 04:53:40.264502568 +0000 UTC m=+1779.073731442" observedRunningTime="2026-01-31 04:53:40.653604753 +0000 UTC 
m=+1779.462833627" watchObservedRunningTime="2026-01-31 04:53:40.65920605 +0000 UTC m=+1779.468434924" Jan 31 04:53:45 crc kubenswrapper[4931]: I0131 04:53:45.897109 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:53:45 crc kubenswrapper[4931]: E0131 04:53:45.897955 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.407755 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xm64z/must-gather-jr2d5"] Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.409672 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.415955 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5718f5a4-8b72-4717-aa57-18c93869939e-must-gather-output\") pod \"must-gather-jr2d5\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.416067 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zlj\" (UniqueName: \"kubernetes.io/projected/5718f5a4-8b72-4717-aa57-18c93869939e-kube-api-access-76zlj\") pod \"must-gather-jr2d5\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.418425 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xm64z"/"openshift-service-ca.crt" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.418772 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xm64z"/"kube-root-ca.crt" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.432360 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xm64z/must-gather-jr2d5"] Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.526403 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zlj\" (UniqueName: \"kubernetes.io/projected/5718f5a4-8b72-4717-aa57-18c93869939e-kube-api-access-76zlj\") pod \"must-gather-jr2d5\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.526732 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5718f5a4-8b72-4717-aa57-18c93869939e-must-gather-output\") pod \"must-gather-jr2d5\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.527121 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/5718f5a4-8b72-4717-aa57-18c93869939e-must-gather-output\") pod \"must-gather-jr2d5\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.550054 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.550912 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.554750 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zlj\" (UniqueName: \"kubernetes.io/projected/5718f5a4-8b72-4717-aa57-18c93869939e-kube-api-access-76zlj\") pod \"must-gather-jr2d5\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.645507 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.733589 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:53:46 crc kubenswrapper[4931]: I0131 04:53:46.768986 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:47 crc kubenswrapper[4931]: I0131 04:53:47.129404 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xm64z/must-gather-jr2d5"] Jan 31 04:53:47 crc kubenswrapper[4931]: I0131 04:53:47.721257 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm64z/must-gather-jr2d5" event={"ID":"5718f5a4-8b72-4717-aa57-18c93869939e","Type":"ContainerStarted","Data":"b1f8dbdf97b49ce7826d35fbfe414f5b0c970b8d6b100aefe92c2a06d5837c46"} Jan 31 04:53:50 crc kubenswrapper[4931]: I0131 04:53:50.153788 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7gp8"] Jan 31 04:53:50 crc kubenswrapper[4931]: I0131 04:53:50.154259 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n7gp8" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="registry-server" containerID="cri-o://468f575ed2c639d65a23d3a9adc2b5945fe2272981c8ceb889ec5bb5c64eb3e5" gracePeriod=2 Jan 31 04:53:50 crc kubenswrapper[4931]: I0131 04:53:50.749766 4931 generic.go:334] "Generic (PLEG): container finished" podID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerID="468f575ed2c639d65a23d3a9adc2b5945fe2272981c8ceb889ec5bb5c64eb3e5" exitCode=0 Jan 31 04:53:50 crc kubenswrapper[4931]: I0131 04:53:50.749806 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7gp8" event={"ID":"833bc5d6-e376-4d6f-a7bf-96c3eff9217b","Type":"ContainerDied","Data":"468f575ed2c639d65a23d3a9adc2b5945fe2272981c8ceb889ec5bb5c64eb3e5"} Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.063593 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.198629 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-catalog-content\") pod \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.198701 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-utilities\") pod \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.198759 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zchfc\" (UniqueName: \"kubernetes.io/projected/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-kube-api-access-zchfc\") pod \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\" (UID: \"833bc5d6-e376-4d6f-a7bf-96c3eff9217b\") " Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.201413 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-utilities" (OuterVolumeSpecName: "utilities") pod "833bc5d6-e376-4d6f-a7bf-96c3eff9217b" (UID: "833bc5d6-e376-4d6f-a7bf-96c3eff9217b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.204692 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-kube-api-access-zchfc" (OuterVolumeSpecName: "kube-api-access-zchfc") pod "833bc5d6-e376-4d6f-a7bf-96c3eff9217b" (UID: "833bc5d6-e376-4d6f-a7bf-96c3eff9217b"). InnerVolumeSpecName "kube-api-access-zchfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.258577 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "833bc5d6-e376-4d6f-a7bf-96c3eff9217b" (UID: "833bc5d6-e376-4d6f-a7bf-96c3eff9217b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.300847 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.300911 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.300926 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zchfc\" (UniqueName: \"kubernetes.io/projected/833bc5d6-e376-4d6f-a7bf-96c3eff9217b-kube-api-access-zchfc\") on node \"crc\" DevicePath \"\"" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.757937 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm64z/must-gather-jr2d5" event={"ID":"5718f5a4-8b72-4717-aa57-18c93869939e","Type":"ContainerStarted","Data":"341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd"} Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.758263 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm64z/must-gather-jr2d5" event={"ID":"5718f5a4-8b72-4717-aa57-18c93869939e","Type":"ContainerStarted","Data":"27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2"} Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.763177 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7gp8" event={"ID":"833bc5d6-e376-4d6f-a7bf-96c3eff9217b","Type":"ContainerDied","Data":"e497d831aebf5d9cf9752886c90418970f37cfd3ba3e76e66a0ee80792f7e0d0"} Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.763234 4931 scope.go:117] "RemoveContainer" containerID="468f575ed2c639d65a23d3a9adc2b5945fe2272981c8ceb889ec5bb5c64eb3e5" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.763397 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7gp8" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.807647 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xm64z/must-gather-jr2d5" podStartSLOduration=2.148836555 podStartE2EDuration="5.807631155s" podCreationTimestamp="2026-01-31 04:53:46 +0000 UTC" firstStartedPulling="2026-01-31 04:53:47.159092867 +0000 UTC m=+1785.968321741" lastFinishedPulling="2026-01-31 04:53:50.817887457 +0000 UTC m=+1789.627116341" observedRunningTime="2026-01-31 04:53:51.776684511 +0000 UTC m=+1790.585913385" watchObservedRunningTime="2026-01-31 04:53:51.807631155 +0000 UTC m=+1790.616860039" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.814499 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7gp8"] Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.815412 4931 scope.go:117] "RemoveContainer" containerID="c21c986a918d68c069f123fdc7e399263dffd9944313fc98e1c143b6d3402231" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.822789 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n7gp8"] Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.839416 4931 scope.go:117] "RemoveContainer" containerID="d342ff0171a991a07d390c3199a1b64867450b6f4e774d3d565f12f764f6f351" Jan 31 04:53:51 crc kubenswrapper[4931]: I0131 04:53:51.911392 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" path="/var/lib/kubelet/pods/833bc5d6-e376-4d6f-a7bf-96c3eff9217b/volumes" Jan 31 04:53:56 crc kubenswrapper[4931]: I0131 04:53:56.897914 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:53:56 crc kubenswrapper[4931]: E0131 04:53:56.899033 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:54:11 crc kubenswrapper[4931]: I0131 04:54:11.900168 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:54:11 crc kubenswrapper[4931]: E0131 04:54:11.900889 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:54:23 crc kubenswrapper[4931]: I0131 04:54:23.897237 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:54:23 crc kubenswrapper[4931]: E0131 04:54:23.897964 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:54:28 crc kubenswrapper[4931]: I0131 04:54:28.565390 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m_3e9000bc-caf4-4e29-9b8f-8d59434c0e3b/util/0.log" Jan 31 04:54:28 crc kubenswrapper[4931]: I0131 04:54:28.800015 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m_3e9000bc-caf4-4e29-9b8f-8d59434c0e3b/util/0.log" Jan 31 04:54:28 crc kubenswrapper[4931]: I0131 04:54:28.813625 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m_3e9000bc-caf4-4e29-9b8f-8d59434c0e3b/pull/0.log" Jan 31 04:54:28 crc kubenswrapper[4931]: I0131 04:54:28.849613 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m_3e9000bc-caf4-4e29-9b8f-8d59434c0e3b/pull/0.log" Jan 31 04:54:28 crc kubenswrapper[4931]: I0131 04:54:28.970243 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m_3e9000bc-caf4-4e29-9b8f-8d59434c0e3b/util/0.log" Jan 31 04:54:28 crc kubenswrapper[4931]: I0131 04:54:28.993010 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m_3e9000bc-caf4-4e29-9b8f-8d59434c0e3b/pull/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.015471 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_08fd67c2704bd579d69037cd31a247ca5611822858b4dfd342a9b187b7thh6m_3e9000bc-caf4-4e29-9b8f-8d59434c0e3b/extract/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.297010 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l_2cb9b4e4-99a7-469f-b4f7-9cd46b023602/util/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.506741 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l_2cb9b4e4-99a7-469f-b4f7-9cd46b023602/util/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.511201 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l_2cb9b4e4-99a7-469f-b4f7-9cd46b023602/pull/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.519135 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l_2cb9b4e4-99a7-469f-b4f7-9cd46b023602/pull/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.684073 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l_2cb9b4e4-99a7-469f-b4f7-9cd46b023602/util/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.717200 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l_2cb9b4e4-99a7-469f-b4f7-9cd46b023602/extract/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.723736 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_676b439cb92f9f44554843f050413ed1e37cf652cd359df0fdc2f4aeefv2s8l_2cb9b4e4-99a7-469f-b4f7-9cd46b023602/pull/0.log" Jan 31 04:54:29 crc kubenswrapper[4931]: I0131 04:54:29.850738 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p_5149516f-c8ae-4644-af21-9ad3dd0e6bb3/util/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.011583 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p_5149516f-c8ae-4644-af21-9ad3dd0e6bb3/util/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.064502 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p_5149516f-c8ae-4644-af21-9ad3dd0e6bb3/pull/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.072299 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p_5149516f-c8ae-4644-af21-9ad3dd0e6bb3/pull/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.202807 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p_5149516f-c8ae-4644-af21-9ad3dd0e6bb3/util/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.243157 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p_5149516f-c8ae-4644-af21-9ad3dd0e6bb3/extract/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.274331 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590vcp4p_5149516f-c8ae-4644-af21-9ad3dd0e6bb3/pull/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.373297 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w_b97c9738-62e4-4623-8974-f8625930a8a5/util/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.567936 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w_b97c9738-62e4-4623-8974-f8625930a8a5/pull/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.574117 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w_b97c9738-62e4-4623-8974-f8625930a8a5/util/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.580819 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w_b97c9738-62e4-4623-8974-f8625930a8a5/pull/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.698237 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w_b97c9738-62e4-4623-8974-f8625930a8a5/util/0.log" Jan 31 04:54:30 
crc kubenswrapper[4931]: I0131 04:54:30.749226 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w_b97c9738-62e4-4623-8974-f8625930a8a5/extract/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.754601 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c82adc99704864fa3117c6cd007ac9b91adfb3c1fe3947cc1fcd81da67lwl7w_b97c9738-62e4-4623-8974-f8625930a8a5/pull/0.log" Jan 31 04:54:30 crc kubenswrapper[4931]: I0131 04:54:30.885785 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f_4446c1a5-9e5b-4b6f-8e57-336a4960e408/util/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.026773 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f_4446c1a5-9e5b-4b6f-8e57-336a4960e408/pull/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.065224 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f_4446c1a5-9e5b-4b6f-8e57-336a4960e408/util/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.065928 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f_4446c1a5-9e5b-4b6f-8e57-336a4960e408/pull/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.214087 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f_4446c1a5-9e5b-4b6f-8e57-336a4960e408/util/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.247843 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f_4446c1a5-9e5b-4b6f-8e57-336a4960e408/extract/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.248092 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1a9c7f176406d46d62243c610ff0c9842d446df45cd633ab29432e823hs66f_4446c1a5-9e5b-4b6f-8e57-336a4960e408/pull/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.441905 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr_d7d13ccf-2b66-4e0f-9ccf-706004dbccaa/util/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.553096 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr_d7d13ccf-2b66-4e0f-9ccf-706004dbccaa/util/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.604907 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr_d7d13ccf-2b66-4e0f-9ccf-706004dbccaa/pull/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.627283 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr_d7d13ccf-2b66-4e0f-9ccf-706004dbccaa/pull/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.800374 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr_d7d13ccf-2b66-4e0f-9ccf-706004dbccaa/util/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.827678 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr_d7d13ccf-2b66-4e0f-9ccf-706004dbccaa/pull/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.836593 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e12ab70c74714c9c6fcdf46fffd6a1432315fddd55631c9fc76cacf069768qr_d7d13ccf-2b66-4e0f-9ccf-706004dbccaa/extract/0.log" Jan 31 04:54:31 crc kubenswrapper[4931]: I0131 04:54:31.913714 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq_eda2f66e-e58a-4521-870a-b05a8cfef2ab/util/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.086120 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq_eda2f66e-e58a-4521-870a-b05a8cfef2ab/util/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.092007 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq_eda2f66e-e58a-4521-870a-b05a8cfef2ab/pull/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.099250 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq_eda2f66e-e58a-4521-870a-b05a8cfef2ab/pull/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.259512 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq_eda2f66e-e58a-4521-870a-b05a8cfef2ab/util/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.272281 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq_eda2f66e-e58a-4521-870a-b05a8cfef2ab/extract/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.281869 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f61da479c3af95ce5c68fff099bafb101fe64c41a2479bef359b4957c4hhqdq_eda2f66e-e58a-4521-870a-b05a8cfef2ab/pull/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.347540 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6b769874f6-2fll5_a4310cc9-8307-46fa-92f5-79c101f3535d/kube-rbac-proxy/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.483297 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-index-pqm86_f332e601-2a6d-46f5-9196-20d3cefa107f/registry-server/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.483820 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6b769874f6-2fll5_a4310cc9-8307-46fa-92f5-79c101f3535d/manager/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.529996 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-69b9d97bb7-8w7m9_f5716144-7d9f-4472-926b-eee4337385fd/kube-rbac-proxy/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 
04:54:32.651661 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-69b9d97bb7-8w7m9_f5716144-7d9f-4472-926b-eee4337385fd/manager/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.679448 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-kxpz6_6c3b4883-c08d-4a23-96e7-d3bb0d0cfea3/registry-server/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.757639 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-74f8d9cd6d-zrh6j_c1d715be-f984-4ca8-9ac4-c55f0a5add63/kube-rbac-proxy/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.863436 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-74f8d9cd6d-zrh6j_c1d715be-f984-4ca8-9ac4-c55f0a5add63/manager/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.947491 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-mnnbm_26afb832-6066-4add-8282-b44b23f796b1/registry-server/0.log" Jan 31 04:54:32 crc kubenswrapper[4931]: I0131 04:54:32.962585 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78d69b64d-9c79r_9de4c2dc-4248-48ff-9eba-77bb5f41af6e/kube-rbac-proxy/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.061791 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78d69b64d-9c79r_9de4c2dc-4248-48ff-9eba-77bb5f41af6e/manager/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.106357 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-cjqh2_14ff67e1-aa92-4a09-94e4-d96a354498d4/registry-server/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.132052 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cbf8cbfc7-hctgm_0cf3c1af-884b-4ec3-b6db-5b975007174b/kube-rbac-proxy/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.243674 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cbf8cbfc7-hctgm_0cf3c1af-884b-4ec3-b6db-5b975007174b/manager/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.330520 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-dd29c_fd79dd5d-c5f9-423b-8e9a-1209131266bc/registry-server/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.357645 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-z858r_13a3c5de-c53d-4ec9-ad1d-0d7cca9713ca/operator/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.429504 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-55xlj_7974a9c9-4c6f-4588-9674-92ab3c2a28ca/registry-server/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.492985 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-f997d59bd-m9rvk_bbadb100-e582-46cd-9460-6bf083b2f53e/kube-rbac-proxy/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.526985 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-f997d59bd-m9rvk_bbadb100-e582-46cd-9460-6bf083b2f53e/manager/0.log" Jan 31 04:54:33 crc kubenswrapper[4931]: I0131 04:54:33.624478 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-75cgq_c3b14f91-1228-4221-9b36-288f45301065/registry-server/0.log" Jan 31 04:54:38 crc kubenswrapper[4931]: I0131 04:54:38.898025 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:54:38 crc kubenswrapper[4931]: E0131 04:54:38.898669 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 04:54:47 crc kubenswrapper[4931]: I0131 04:54:47.517108 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qgjvq_16b7b136-dad4-4347-9941-d97a23fa694c/control-plane-machine-set-operator/0.log" Jan 31 04:54:47 crc kubenswrapper[4931]: I0131 04:54:47.611242 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5p2zv_7f411a06-d760-4d52-8939-36856b6813ad/kube-rbac-proxy/0.log" Jan 31 04:54:47 crc kubenswrapper[4931]: I0131 04:54:47.652368 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5p2zv_7f411a06-d760-4d52-8939-36856b6813ad/machine-api-operator/0.log" Jan 31 04:54:52 crc kubenswrapper[4931]: I0131 04:54:52.896237 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:54:53 crc kubenswrapper[4931]: I0131 04:54:53.189966 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"047e2f36082083d6896ae43a3cede568805e5bab324b0207cfddda334c9e723f"} Jan 31 04:55:14 crc kubenswrapper[4931]: I0131 04:55:14.744529 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bndg7_81f751e3-7fea-442f-99b7-70d65ff4e802/kube-rbac-proxy/0.log" Jan 31 04:55:14 crc kubenswrapper[4931]: I0131 04:55:14.842793 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bndg7_81f751e3-7fea-442f-99b7-70d65ff4e802/controller/0.log" Jan 31 04:55:14 crc kubenswrapper[4931]: I0131 04:55:14.943108 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-frr-files/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.117273 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-frr-files/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.132201 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-reloader/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.141697 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-metrics/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.150584 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-reloader/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.327121 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-frr-files/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.379153 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-metrics/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.382294 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-metrics/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.397582 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-reloader/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.548112 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-reloader/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.558251 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/controller/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.583852 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-frr-files/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.583960 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/cp-metrics/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.746639 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/kube-rbac-proxy/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.769965 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/kube-rbac-proxy-frr/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.801120 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/frr-metrics/0.log" Jan 31 04:55:15 crc kubenswrapper[4931]: I0131 04:55:15.959106 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/reloader/0.log" Jan 31 04:55:16 crc kubenswrapper[4931]: I0131 04:55:16.009843 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8gfbc_cec6db71-d107-4ab6-b8ad-138e41b728a8/frr-k8s-webhook-server/0.log" Jan 31 04:55:16 crc kubenswrapper[4931]: I0131 04:55:16.065655 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cftp7_aa28e58b-e4e2-49d0-ac45-fae2643816f7/frr/0.log" Jan 31 04:55:16 crc kubenswrapper[4931]: I0131 04:55:16.216453 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57fb747bf-qc864_bd4db3c6-d696-4339-b0a4-9283164a27f8/manager/0.log" Jan 31 04:55:16 crc kubenswrapper[4931]: I0131 04:55:16.240577 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-87cfd9976-vvr94_fb52765c-4d03-4b55-84e5-e561f54a06bd/webhook-server/0.log" Jan 31 04:55:16 crc kubenswrapper[4931]: I0131 04:55:16.389608 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qxx5h_21aa5a72-184e-4c7f-9f5b-565a457db5a8/kube-rbac-proxy/0.log" Jan 31 04:55:16 crc kubenswrapper[4931]: I0131 04:55:16.497401 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qxx5h_21aa5a72-184e-4c7f-9f5b-565a457db5a8/speaker/0.log" Jan 31 04:55:29 crc kubenswrapper[4931]: I0131 04:55:29.775308 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-2502-account-create-jn4hj_c4368258-0f40-42db-a1ab-c50f30518a4e/mariadb-account-create/0.log" Jan 31 04:55:29 crc kubenswrapper[4931]: I0131 04:55:29.934236 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-db-sync-lf6qp_7d392663-51d0-4a1d-88ef-ade68860d06a/glance-db-sync/0.log" Jan 31 04:55:29 crc kubenswrapper[4931]: I0131 04:55:29.942269 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-db-create-zbsb7_b1df0a92-18a7-4cf4-9577-c94a66b3ca3c/mariadb-database-create/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.114369 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-external-api-0_6e5cb183-3a6f-4601-848c-f7af7ff21a9e/glance-api/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.160929 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-external-api-0_6e5cb183-3a6f-4601-848c-f7af7ff21a9e/glance-httpd/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.184740 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-external-api-0_6e5cb183-3a6f-4601-848c-f7af7ff21a9e/glance-log/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.355344 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-internal-api-0_af30e89b-382b-4005-8a4e-e48573c7913a/glance-httpd/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.361509 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-internal-api-0_af30e89b-382b-4005-8a4e-e48573c7913a/glance-api/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.404729 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-internal-api-0_af30e89b-382b-4005-8a4e-e48573c7913a/glance-log/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.765092 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_4c4e50a2-14c9-4128-b467-67e66bd4b0ed/mysql-bootstrap/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.812079 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_keystone-856cb9b857-klqt6_be9d3e91-53c4-4d15-aaa5-3ff67279dc3f/keystone-api/0.log" Jan 31 04:55:30 crc kubenswrapper[4931]: I0131 04:55:30.903958 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_4c4e50a2-14c9-4128-b467-67e66bd4b0ed/mysql-bootstrap/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.022632 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_4c4e50a2-14c9-4128-b467-67e66bd4b0ed/galera/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.114983 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_2ee0bbbf-b9b8-408a-9c09-8b1655718106/mysql-bootstrap/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.351021 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_2ee0bbbf-b9b8-408a-9c09-8b1655718106/mysql-bootstrap/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.453516 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_2ee0bbbf-b9b8-408a-9c09-8b1655718106/galera/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.564958 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_f7186a32-8d8b-433c-b191-86787137c1d1/mysql-bootstrap/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.789607 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_f7186a32-8d8b-433c-b191-86787137c1d1/mysql-bootstrap/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.812048 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_f7186a32-8d8b-433c-b191-86787137c1d1/galera/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.814560 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_memcached-0_6e70e69a-ca63-4885-91ee-92f55b9c3c5c/memcached/0.log" Jan 31 04:55:31 crc kubenswrapper[4931]: I0131 04:55:31.943791 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_f7e7eb33-452c-4662-ba80-73ff64ec73fd/openstackclient/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.026298 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_3166808f-2786-4207-9cb3-f32437499a16/setup-container/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.215984 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_3166808f-2786-4207-9cb3-f32437499a16/setup-container/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.257238 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_3166808f-2786-4207-9cb3-f32437499a16/rabbitmq/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.263713 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-5957d6665c-7659x_e4899d9b-4958-48d0-bec1-f13bc66b49a5/proxy-httpd/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.379331 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-5957d6665c-7659x_e4899d9b-4958-48d0-bec1-f13bc66b49a5/proxy-server/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.445653 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-ring-rebalance-trn87_cfe33f21-0c4a-4efe-a9f5-9cb3b71568c3/swift-ring-rebalance/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.545481 4931 log.go:25] "Finished 
parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/account-auditor/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.617155 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/account-replicator/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.654147 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/account-reaper/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.723009 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/account-server/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.745116 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/container-auditor/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.839347 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/container-replicator/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.893856 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/container-server/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.918293 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/container-updater/0.log" Jan 31 04:55:32 crc kubenswrapper[4931]: I0131 04:55:32.918933 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/object-auditor/0.log" Jan 31 04:55:33 crc kubenswrapper[4931]: I0131 04:55:33.025381 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/object-expirer/0.log" Jan 31 04:55:33 crc kubenswrapper[4931]: I0131 04:55:33.055962 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/object-replicator/0.log" Jan 31 04:55:33 crc kubenswrapper[4931]: I0131 04:55:33.111526 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/object-server/0.log" Jan 31 04:55:33 crc kubenswrapper[4931]: I0131 04:55:33.127968 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/object-updater/0.log" Jan 31 04:55:33 crc kubenswrapper[4931]: I0131 04:55:33.218437 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/swift-recon-cron/0.log" Jan 31 04:55:33 crc kubenswrapper[4931]: I0131 04:55:33.232032 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0355d163-55e9-4ac4-8dd4-081e9a637aaf/rsync/0.log" Jan 31 04:55:38 crc kubenswrapper[4931]: I0131 04:55:38.028197 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-zbsb7"] Jan 31 04:55:38 crc kubenswrapper[4931]: I0131 04:55:38.033518 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["glance-kuttl-tests/glance-db-create-zbsb7"] Jan 31 04:55:39 crc kubenswrapper[4931]: I0131 04:55:39.904106 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1df0a92-18a7-4cf4-9577-c94a66b3ca3c" path="/var/lib/kubelet/pods/b1df0a92-18a7-4cf4-9577-c94a66b3ca3c/volumes" Jan 31 04:55:45 crc kubenswrapper[4931]: I0131 04:55:45.704392 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce/util/0.log" Jan 31 04:55:45 crc kubenswrapper[4931]: I0131 04:55:45.866540 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce/pull/0.log" Jan 31 04:55:45 crc kubenswrapper[4931]: I0131 04:55:45.909066 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce/util/0.log" Jan 31 04:55:45 crc kubenswrapper[4931]: I0131 04:55:45.928556 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce/pull/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.042223 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce/util/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.048132 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce/pull/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.109916 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdm769_3b1478e9-7a8e-4195-ba7c-f6f0f8cbbfce/extract/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.206883 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95vhg_c7fca46a-8b1a-4655-8f36-777e9779c57a/extract-utilities/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.374298 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95vhg_c7fca46a-8b1a-4655-8f36-777e9779c57a/extract-utilities/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.375185 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95vhg_c7fca46a-8b1a-4655-8f36-777e9779c57a/extract-content/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.400489 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95vhg_c7fca46a-8b1a-4655-8f36-777e9779c57a/extract-content/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.562731 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95vhg_c7fca46a-8b1a-4655-8f36-777e9779c57a/extract-utilities/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.641817 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95vhg_c7fca46a-8b1a-4655-8f36-777e9779c57a/extract-content/0.log" 
Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.797758 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qclkl_3f974c01-9474-4fcd-a478-d9d56a32995b/extract-utilities/0.log" Jan 31 04:55:46 crc kubenswrapper[4931]: I0131 04:55:46.965332 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qclkl_3f974c01-9474-4fcd-a478-d9d56a32995b/extract-utilities/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.032383 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95vhg_c7fca46a-8b1a-4655-8f36-777e9779c57a/registry-server/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.052643 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qclkl_3f974c01-9474-4fcd-a478-d9d56a32995b/extract-content/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.080084 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qclkl_3f974c01-9474-4fcd-a478-d9d56a32995b/extract-content/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.203465 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qclkl_3f974c01-9474-4fcd-a478-d9d56a32995b/extract-content/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.206929 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qclkl_3f974c01-9474-4fcd-a478-d9d56a32995b/extract-utilities/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.478278 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-26slj_fd940fdb-3b83-421e-bdaa-5a238a9bb908/marketplace-operator/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.554719 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-htqgw_7a3d371d-d98f-4f82-a823-b74e23f9ca19/extract-utilities/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.696010 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-htqgw_7a3d371d-d98f-4f82-a823-b74e23f9ca19/extract-utilities/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.732558 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qclkl_3f974c01-9474-4fcd-a478-d9d56a32995b/registry-server/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.753475 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-htqgw_7a3d371d-d98f-4f82-a823-b74e23f9ca19/extract-content/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.755237 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-htqgw_7a3d371d-d98f-4f82-a823-b74e23f9ca19/extract-content/0.log" Jan 31 04:55:47 crc kubenswrapper[4931]: I0131 04:55:47.965891 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-htqgw_7a3d371d-d98f-4f82-a823-b74e23f9ca19/extract-content/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.005777 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-htqgw_7a3d371d-d98f-4f82-a823-b74e23f9ca19/extract-utilities/0.log" Jan 31 
04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.024592 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-2502-account-create-jn4hj"] Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.030205 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-2502-account-create-jn4hj"] Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.070823 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-htqgw_7a3d371d-d98f-4f82-a823-b74e23f9ca19/registry-server/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.152296 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fx9m6_61b4ef96-378e-443d-9eeb-e75e6f181af6/extract-utilities/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.302849 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fx9m6_61b4ef96-378e-443d-9eeb-e75e6f181af6/extract-utilities/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.307410 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fx9m6_61b4ef96-378e-443d-9eeb-e75e6f181af6/extract-content/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.343206 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fx9m6_61b4ef96-378e-443d-9eeb-e75e6f181af6/extract-content/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.492980 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fx9m6_61b4ef96-378e-443d-9eeb-e75e6f181af6/extract-content/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.497407 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fx9m6_61b4ef96-378e-443d-9eeb-e75e6f181af6/extract-utilities/0.log" Jan 31 04:55:48 crc kubenswrapper[4931]: I0131 04:55:48.907336 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fx9m6_61b4ef96-378e-443d-9eeb-e75e6f181af6/registry-server/0.log" Jan 31 04:55:49 crc kubenswrapper[4931]: I0131 04:55:49.906302 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4368258-0f40-42db-a1ab-c50f30518a4e" path="/var/lib/kubelet/pods/c4368258-0f40-42db-a1ab-c50f30518a4e/volumes" Jan 31 04:55:56 crc kubenswrapper[4931]: I0131 04:55:56.027689 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lf6qp"] Jan 31 04:55:56 crc kubenswrapper[4931]: I0131 04:55:56.034275 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lf6qp"] Jan 31 04:55:57 crc kubenswrapper[4931]: I0131 04:55:57.904648 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d392663-51d0-4a1d-88ef-ade68860d06a" path="/var/lib/kubelet/pods/7d392663-51d0-4a1d-88ef-ade68860d06a/volumes" Jan 31 04:56:03 crc kubenswrapper[4931]: I0131 04:56:03.595537 4931 scope.go:117] "RemoveContainer" containerID="687fb00739344b8b2ff681b72260259e5da90ff282675d01548b950d09d667ab" Jan 31 04:56:03 crc kubenswrapper[4931]: I0131 04:56:03.651994 4931 scope.go:117] "RemoveContainer" containerID="daafcd839dd86300149d0cab96231384784a9ced60a536ab559836edf1fd3af0" Jan 31 04:56:03 crc kubenswrapper[4931]: I0131 04:56:03.674387 4931 scope.go:117] "RemoveContainer" 
containerID="89709f3a55a91267273b5d27d43306d68987b4bc0cdbf1a13c36292caa6af38c" Jan 31 04:57:01 crc kubenswrapper[4931]: I0131 04:57:01.116429 4931 generic.go:334] "Generic (PLEG): container finished" podID="5718f5a4-8b72-4717-aa57-18c93869939e" containerID="27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2" exitCode=0 Jan 31 04:57:01 crc kubenswrapper[4931]: I0131 04:57:01.116522 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm64z/must-gather-jr2d5" event={"ID":"5718f5a4-8b72-4717-aa57-18c93869939e","Type":"ContainerDied","Data":"27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2"} Jan 31 04:57:01 crc kubenswrapper[4931]: I0131 04:57:01.117534 4931 scope.go:117] "RemoveContainer" containerID="27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2" Jan 31 04:57:02 crc kubenswrapper[4931]: I0131 04:57:02.114325 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xm64z_must-gather-jr2d5_5718f5a4-8b72-4717-aa57-18c93869939e/gather/0.log" Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.452227 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xm64z/must-gather-jr2d5"] Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.452886 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xm64z/must-gather-jr2d5" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" containerName="copy" containerID="cri-o://341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd" gracePeriod=2 Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.464388 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xm64z/must-gather-jr2d5"] Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.849629 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xm64z_must-gather-jr2d5_5718f5a4-8b72-4717-aa57-18c93869939e/copy/0.log" Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.850194 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.891485 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76zlj\" (UniqueName: \"kubernetes.io/projected/5718f5a4-8b72-4717-aa57-18c93869939e-kube-api-access-76zlj\") pod \"5718f5a4-8b72-4717-aa57-18c93869939e\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.891624 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5718f5a4-8b72-4717-aa57-18c93869939e-must-gather-output\") pod \"5718f5a4-8b72-4717-aa57-18c93869939e\" (UID: \"5718f5a4-8b72-4717-aa57-18c93869939e\") " Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.898529 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5718f5a4-8b72-4717-aa57-18c93869939e-kube-api-access-76zlj" (OuterVolumeSpecName: "kube-api-access-76zlj") pod "5718f5a4-8b72-4717-aa57-18c93869939e" (UID: "5718f5a4-8b72-4717-aa57-18c93869939e"). InnerVolumeSpecName "kube-api-access-76zlj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.970525 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5718f5a4-8b72-4717-aa57-18c93869939e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5718f5a4-8b72-4717-aa57-18c93869939e" (UID: "5718f5a4-8b72-4717-aa57-18c93869939e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.993889 4931 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5718f5a4-8b72-4717-aa57-18c93869939e-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 04:57:09 crc kubenswrapper[4931]: I0131 04:57:09.993930 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76zlj\" (UniqueName: \"kubernetes.io/projected/5718f5a4-8b72-4717-aa57-18c93869939e-kube-api-access-76zlj\") on node \"crc\" DevicePath \"\"" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.192162 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xm64z_must-gather-jr2d5_5718f5a4-8b72-4717-aa57-18c93869939e/copy/0.log" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.203400 4931 generic.go:334] "Generic (PLEG): container finished" podID="5718f5a4-8b72-4717-aa57-18c93869939e" containerID="341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd" exitCode=143 Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.203511 4931 scope.go:117] "RemoveContainer" containerID="341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.203576 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xm64z/must-gather-jr2d5" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.230577 4931 scope.go:117] "RemoveContainer" containerID="27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.277352 4931 scope.go:117] "RemoveContainer" containerID="341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd" Jan 31 04:57:10 crc kubenswrapper[4931]: E0131 04:57:10.277825 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd\": container with ID starting with 341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd not found: ID does not exist" containerID="341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.277867 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd"} err="failed to get container status \"341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd\": rpc error: code = NotFound desc = could not find container \"341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd\": container with ID starting with 341817c956f8c781e0e81cbc7f7bf36bfad7c739fcf3ba5e490946297d428fdd not found: ID does not exist" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.277895 4931 scope.go:117] "RemoveContainer" containerID="27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2" Jan 31 04:57:10 crc kubenswrapper[4931]: E0131 04:57:10.278269 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2\": container with ID starting with 27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2 not found: ID does not exist" containerID="27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2" Jan 31 04:57:10 crc kubenswrapper[4931]: I0131 04:57:10.278333 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2"} err="failed to get container status \"27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2\": rpc error: code = NotFound desc = could not find container \"27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2\": container with ID starting with 27244b311be87d9b986ce16049dbbf77bb5f571bc4e5c59696bc12b6df532ab2 not found: ID does not exist" Jan 31 04:57:11 crc kubenswrapper[4931]: I0131 04:57:11.907120 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" path="/var/lib/kubelet/pods/5718f5a4-8b72-4717-aa57-18c93869939e/volumes" Jan 31 04:57:21 crc kubenswrapper[4931]: I0131 04:57:21.134753 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:57:21 crc kubenswrapper[4931]: I0131 04:57:21.135353 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" 
podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:57:51 crc kubenswrapper[4931]: I0131 04:57:51.133791 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:57:51 crc kubenswrapper[4931]: I0131 04:57:51.134708 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.133230 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.133884 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.133949 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.134796 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"047e2f36082083d6896ae43a3cede568805e5bab324b0207cfddda334c9e723f"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.134886 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://047e2f36082083d6896ae43a3cede568805e5bab324b0207cfddda334c9e723f" gracePeriod=600 Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.740998 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="047e2f36082083d6896ae43a3cede568805e5bab324b0207cfddda334c9e723f" exitCode=0 Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.741095 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"047e2f36082083d6896ae43a3cede568805e5bab324b0207cfddda334c9e723f"} Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.741636 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" 
event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerStarted","Data":"9f5f88ecd69f58f7a273fb52ae29b1634585278a7a92f8742797538314c31e33"} Jan 31 04:58:21 crc kubenswrapper[4931]: I0131 04:58:21.741661 4931 scope.go:117] "RemoveContainer" containerID="690b19e7905f0a2187bff09c00483f59bd89eda4e7e17bde9f431e169fd28cf4" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.967631 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5s6k"] Jan 31 04:59:46 crc kubenswrapper[4931]: E0131 04:59:46.969066 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="extract-utilities" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969108 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="extract-utilities" Jan 31 04:59:46 crc kubenswrapper[4931]: E0131 04:59:46.969137 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" containerName="gather" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969157 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" containerName="gather" Jan 31 04:59:46 crc kubenswrapper[4931]: E0131 04:59:46.969190 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="registry-server" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969207 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="registry-server" Jan 31 04:59:46 crc kubenswrapper[4931]: E0131 04:59:46.969254 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" containerName="copy" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969275 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" containerName="copy" Jan 31 04:59:46 crc kubenswrapper[4931]: E0131 04:59:46.969320 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="extract-content" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969339 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="extract-content" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969657 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" containerName="copy" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969783 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5718f5a4-8b72-4717-aa57-18c93869939e" containerName="gather" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.969826 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="833bc5d6-e376-4d6f-a7bf-96c3eff9217b" containerName="registry-server" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.972260 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:46 crc kubenswrapper[4931]: I0131 04:59:46.997369 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5s6k"] Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.115525 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-utilities\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.115846 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-catalog-content\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.115912 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgb9x\" (UniqueName: \"kubernetes.io/projected/00fbb90b-254d-45a7-a11c-ee537f597757-kube-api-access-bgb9x\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.217690 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-catalog-content\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.217763 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgb9x\" (UniqueName: \"kubernetes.io/projected/00fbb90b-254d-45a7-a11c-ee537f597757-kube-api-access-bgb9x\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.217837 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-utilities\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.218321 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-utilities\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.218378 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-catalog-content\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.239087 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bgb9x\" (UniqueName: \"kubernetes.io/projected/00fbb90b-254d-45a7-a11c-ee537f597757-kube-api-access-bgb9x\") pod \"redhat-marketplace-s5s6k\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.304288 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:47 crc kubenswrapper[4931]: I0131 04:59:47.787287 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5s6k"] Jan 31 04:59:48 crc kubenswrapper[4931]: I0131 04:59:48.442105 4931 generic.go:334] "Generic (PLEG): container finished" podID="00fbb90b-254d-45a7-a11c-ee537f597757" containerID="7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371" exitCode=0 Jan 31 04:59:48 crc kubenswrapper[4931]: I0131 04:59:48.442158 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5s6k" event={"ID":"00fbb90b-254d-45a7-a11c-ee537f597757","Type":"ContainerDied","Data":"7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371"} Jan 31 04:59:48 crc kubenswrapper[4931]: I0131 04:59:48.443320 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5s6k" event={"ID":"00fbb90b-254d-45a7-a11c-ee537f597757","Type":"ContainerStarted","Data":"71cbe7bd073aa741245dba20c320904b80fbdffd89944b7c893486b8a8f4bba5"} Jan 31 04:59:48 crc kubenswrapper[4931]: I0131 04:59:48.445574 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:59:49 crc kubenswrapper[4931]: I0131 04:59:49.452489 4931 generic.go:334] "Generic (PLEG): container finished" podID="00fbb90b-254d-45a7-a11c-ee537f597757" containerID="b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f" exitCode=0 Jan 31 04:59:49 crc kubenswrapper[4931]: I0131 04:59:49.452681 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5s6k" event={"ID":"00fbb90b-254d-45a7-a11c-ee537f597757","Type":"ContainerDied","Data":"b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f"} Jan 31 04:59:50 crc kubenswrapper[4931]: I0131 04:59:50.461437 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5s6k" event={"ID":"00fbb90b-254d-45a7-a11c-ee537f597757","Type":"ContainerStarted","Data":"9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b"} Jan 31 04:59:50 crc kubenswrapper[4931]: I0131 04:59:50.484382 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5s6k" podStartSLOduration=3.072106581 podStartE2EDuration="4.484366909s" podCreationTimestamp="2026-01-31 04:59:46 +0000 UTC" firstStartedPulling="2026-01-31 04:59:48.445138104 +0000 UTC m=+2147.254367018" lastFinishedPulling="2026-01-31 04:59:49.857398452 +0000 UTC m=+2148.666627346" observedRunningTime="2026-01-31 04:59:50.478273878 +0000 UTC m=+2149.287502762" watchObservedRunningTime="2026-01-31 04:59:50.484366909 +0000 UTC m=+2149.293595783" Jan 31 04:59:57 crc kubenswrapper[4931]: I0131 04:59:57.306820 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:57 crc kubenswrapper[4931]: I0131 04:59:57.307438 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:57 crc kubenswrapper[4931]: I0131 04:59:57.350384 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:57 crc kubenswrapper[4931]: I0131 04:59:57.545835 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 04:59:57 crc kubenswrapper[4931]: I0131 04:59:57.580612 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5s6k"] Jan 31 04:59:59 crc kubenswrapper[4931]: I0131 04:59:59.520695 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5s6k" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="registry-server" containerID="cri-o://9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b" gracePeriod=2 Jan 31 04:59:59 crc kubenswrapper[4931]: I0131 04:59:59.914886 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.023120 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-catalog-content\") pod \"00fbb90b-254d-45a7-a11c-ee537f597757\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.023298 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-utilities\") pod \"00fbb90b-254d-45a7-a11c-ee537f597757\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.023353 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgb9x\" (UniqueName: \"kubernetes.io/projected/00fbb90b-254d-45a7-a11c-ee537f597757-kube-api-access-bgb9x\") pod \"00fbb90b-254d-45a7-a11c-ee537f597757\" (UID: \"00fbb90b-254d-45a7-a11c-ee537f597757\") " Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.025135 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-utilities" (OuterVolumeSpecName: "utilities") pod "00fbb90b-254d-45a7-a11c-ee537f597757" (UID: "00fbb90b-254d-45a7-a11c-ee537f597757"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.029102 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fbb90b-254d-45a7-a11c-ee537f597757-kube-api-access-bgb9x" (OuterVolumeSpecName: "kube-api-access-bgb9x") pod "00fbb90b-254d-45a7-a11c-ee537f597757" (UID: "00fbb90b-254d-45a7-a11c-ee537f597757"). InnerVolumeSpecName "kube-api-access-bgb9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.044876 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00fbb90b-254d-45a7-a11c-ee537f597757" (UID: "00fbb90b-254d-45a7-a11c-ee537f597757"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.125450 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgb9x\" (UniqueName: \"kubernetes.io/projected/00fbb90b-254d-45a7-a11c-ee537f597757-kube-api-access-bgb9x\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.125489 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.125498 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fbb90b-254d-45a7-a11c-ee537f597757-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.144487 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2"] Jan 31 05:00:00 crc kubenswrapper[4931]: E0131 05:00:00.144977 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="extract-content" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.144999 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="extract-content" Jan 31 05:00:00 crc kubenswrapper[4931]: E0131 05:00:00.145023 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.145032 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4931]: E0131 05:00:00.145050 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="extract-utilities" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.145059 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="extract-utilities" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.145244 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.145914 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.153665 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5"] Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.154953 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.160410 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4"] Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.161466 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.165183 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.168091 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2"] Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.176016 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5"] Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.183622 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4"] Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.184541 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327578 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327627 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/963ed3a4-0084-484e-8ac2-269f6443f669-config-volume\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327662 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/8fef9306-8d52-4de8-b5b5-887f46c7c817-image-cache-config-data\") pod \"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327691 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdb7t\" (UniqueName: \"kubernetes.io/projected/8fef9306-8d52-4de8-b5b5-887f46c7c817-kube-api-access-qdb7t\") pod \"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327794 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjv7\" (UniqueName: \"kubernetes.io/projected/963ed3a4-0084-484e-8ac2-269f6443f669-kube-api-access-xtjv7\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327834 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/ca286f64-76b9-41f4-8bd6-5daacc4f5895-image-cache-config-data\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327866 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zj7\" (UniqueName: \"kubernetes.io/projected/ca286f64-76b9-41f4-8bd6-5daacc4f5895-kube-api-access-r2zj7\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327903 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.327942 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/963ed3a4-0084-484e-8ac2-269f6443f669-secret-volume\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.345425 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.347110 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.429603 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdb7t\" (UniqueName: \"kubernetes.io/projected/8fef9306-8d52-4de8-b5b5-887f46c7c817-kube-api-access-qdb7t\") pod \"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.429701 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjv7\" (UniqueName: \"kubernetes.io/projected/963ed3a4-0084-484e-8ac2-269f6443f669-kube-api-access-xtjv7\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.429766 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/ca286f64-76b9-41f4-8bd6-5daacc4f5895-image-cache-config-data\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.429796 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2zj7\" (UniqueName: \"kubernetes.io/projected/ca286f64-76b9-41f4-8bd6-5daacc4f5895-kube-api-access-r2zj7\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.430526 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/963ed3a4-0084-484e-8ac2-269f6443f669-secret-volume\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.430570 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/963ed3a4-0084-484e-8ac2-269f6443f669-config-volume\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.430598 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/8fef9306-8d52-4de8-b5b5-887f46c7c817-image-cache-config-data\") pod \"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.432268 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/963ed3a4-0084-484e-8ac2-269f6443f669-config-volume\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.433669 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/ca286f64-76b9-41f4-8bd6-5daacc4f5895-image-cache-config-data\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.434270 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/8fef9306-8d52-4de8-b5b5-887f46c7c817-image-cache-config-data\") pod 
\"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.434845 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/963ed3a4-0084-484e-8ac2-269f6443f669-secret-volume\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.447321 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdb7t\" (UniqueName: \"kubernetes.io/projected/8fef9306-8d52-4de8-b5b5-887f46c7c817-kube-api-access-qdb7t\") pod \"glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.447935 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjv7\" (UniqueName: \"kubernetes.io/projected/963ed3a4-0084-484e-8ac2-269f6443f669-kube-api-access-xtjv7\") pod \"collect-profiles-29497260-xpwt4\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.447963 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2zj7\" (UniqueName: \"kubernetes.io/projected/ca286f64-76b9-41f4-8bd6-5daacc4f5895-kube-api-access-r2zj7\") pod \"glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.470688 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.489412 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.504657 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.532076 4931 generic.go:334] "Generic (PLEG): container finished" podID="00fbb90b-254d-45a7-a11c-ee537f597757" containerID="9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b" exitCode=0 Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.532143 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5s6k" event={"ID":"00fbb90b-254d-45a7-a11c-ee537f597757","Type":"ContainerDied","Data":"9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b"} Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.532176 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5s6k" event={"ID":"00fbb90b-254d-45a7-a11c-ee537f597757","Type":"ContainerDied","Data":"71cbe7bd073aa741245dba20c320904b80fbdffd89944b7c893486b8a8f4bba5"} Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.532224 4931 scope.go:117] "RemoveContainer" containerID="9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.532411 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5s6k" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.586263 4931 scope.go:117] "RemoveContainer" containerID="b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.604499 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5s6k"] Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.623337 4931 scope.go:117] "RemoveContainer" containerID="7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.629770 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5s6k"] Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.641433 4931 scope.go:117] "RemoveContainer" containerID="9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b" Jan 31 05:00:00 crc kubenswrapper[4931]: E0131 05:00:00.642397 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b\": container with ID starting with 9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b not found: ID does not exist" containerID="9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.642424 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b"} err="failed to get container status \"9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b\": rpc error: code = NotFound desc = could not find container \"9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b\": container with ID starting with 9a66d84f8456bd69d19474ebe705d37349f248398cca98db64c71324f6b7657b not found: ID does not exist" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.642450 4931 scope.go:117] "RemoveContainer" containerID="b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f" Jan 31 05:00:00 crc kubenswrapper[4931]: E0131 05:00:00.642892 
4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f\": container with ID starting with b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f not found: ID does not exist" containerID="b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.642929 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f"} err="failed to get container status \"b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f\": rpc error: code = NotFound desc = could not find container \"b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f\": container with ID starting with b83b083be583a8fe6f695606113e284604181831baa76b4a151d78a4040d424f not found: ID does not exist" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.642958 4931 scope.go:117] "RemoveContainer" containerID="7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371" Jan 31 05:00:00 crc kubenswrapper[4931]: E0131 05:00:00.643194 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371\": container with ID starting with 7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371 not found: ID does not exist" containerID="7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.643209 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371"} err="failed to get container status \"7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371\": rpc error: code = NotFound desc = could not find container \"7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371\": container with ID starting with 7c6f873791805c0078c53cb60f73b97d3895fe7cb41da9785894baa00f3dc371 not found: ID does not exist" Jan 31 05:00:00 crc kubenswrapper[4931]: I0131 05:00:00.997026 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2"] Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.002058 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5"] Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.068825 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4"] Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.540601 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" event={"ID":"8fef9306-8d52-4de8-b5b5-887f46c7c817","Type":"ContainerStarted","Data":"aea74b8e85d8d4a37905f072dbdbd21862ccd3b80e27a92ff06f9c46b652c09e"} Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.542129 4931 generic.go:334] "Generic (PLEG): container finished" podID="963ed3a4-0084-484e-8ac2-269f6443f669" containerID="3c40fa28cfd1bdb848eb0221c8d90e6c31d59cf6bab968bb7db53fd75a471ea7" exitCode=0 Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.542171 4931 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" event={"ID":"963ed3a4-0084-484e-8ac2-269f6443f669","Type":"ContainerDied","Data":"3c40fa28cfd1bdb848eb0221c8d90e6c31d59cf6bab968bb7db53fd75a471ea7"} Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.542187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" event={"ID":"963ed3a4-0084-484e-8ac2-269f6443f669","Type":"ContainerStarted","Data":"fe147d981580ac5f2cbd94672a8fd1db8bc9bec98aa8a9096cf4d61fbc538297"} Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.543436 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" event={"ID":"ca286f64-76b9-41f4-8bd6-5daacc4f5895","Type":"ContainerStarted","Data":"d12e5af19f2fb0e378a61e9739a1a741ba431ae1fdeea24a7ab3a41b86b93531"} Jan 31 05:00:01 crc kubenswrapper[4931]: I0131 05:00:01.904863 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00fbb90b-254d-45a7-a11c-ee537f597757" path="/var/lib/kubelet/pods/00fbb90b-254d-45a7-a11c-ee537f597757/volumes" Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.551686 4931 generic.go:334] "Generic (PLEG): container finished" podID="8fef9306-8d52-4de8-b5b5-887f46c7c817" containerID="557172ffd92f76eefa53eeba98c91635b23c20fb47bf677d210de065c4f0cf6b" exitCode=0 Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.551752 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" event={"ID":"8fef9306-8d52-4de8-b5b5-887f46c7c817","Type":"ContainerDied","Data":"557172ffd92f76eefa53eeba98c91635b23c20fb47bf677d210de065c4f0cf6b"} Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.553403 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" event={"ID":"ca286f64-76b9-41f4-8bd6-5daacc4f5895","Type":"ContainerStarted","Data":"afea0b8d79dcdaabc3f2b839e3a32aaddfbc0e2e65374d5bbdfa6fd3185d651e"} Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.583476 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" podStartSLOduration=2.583460588 podStartE2EDuration="2.583460588s" podCreationTimestamp="2026-01-31 05:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:02.583355195 +0000 UTC m=+2161.392584079" watchObservedRunningTime="2026-01-31 05:00:02.583460588 +0000 UTC m=+2161.392689462" Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.837301 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.971745 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/963ed3a4-0084-484e-8ac2-269f6443f669-secret-volume\") pod \"963ed3a4-0084-484e-8ac2-269f6443f669\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.971928 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtjv7\" (UniqueName: \"kubernetes.io/projected/963ed3a4-0084-484e-8ac2-269f6443f669-kube-api-access-xtjv7\") pod \"963ed3a4-0084-484e-8ac2-269f6443f669\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.971959 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/963ed3a4-0084-484e-8ac2-269f6443f669-config-volume\") pod \"963ed3a4-0084-484e-8ac2-269f6443f669\" (UID: \"963ed3a4-0084-484e-8ac2-269f6443f669\") " Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.972994 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963ed3a4-0084-484e-8ac2-269f6443f669-config-volume" (OuterVolumeSpecName: "config-volume") pod "963ed3a4-0084-484e-8ac2-269f6443f669" (UID: "963ed3a4-0084-484e-8ac2-269f6443f669"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.991070 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963ed3a4-0084-484e-8ac2-269f6443f669-kube-api-access-xtjv7" (OuterVolumeSpecName: "kube-api-access-xtjv7") pod "963ed3a4-0084-484e-8ac2-269f6443f669" (UID: "963ed3a4-0084-484e-8ac2-269f6443f669"). InnerVolumeSpecName "kube-api-access-xtjv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:02 crc kubenswrapper[4931]: I0131 05:00:02.993000 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963ed3a4-0084-484e-8ac2-269f6443f669-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "963ed3a4-0084-484e-8ac2-269f6443f669" (UID: "963ed3a4-0084-484e-8ac2-269f6443f669"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.073585 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtjv7\" (UniqueName: \"kubernetes.io/projected/963ed3a4-0084-484e-8ac2-269f6443f669-kube-api-access-xtjv7\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.073615 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/963ed3a4-0084-484e-8ac2-269f6443f669-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.073623 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/963ed3a4-0084-484e-8ac2-269f6443f669-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.571982 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" event={"ID":"963ed3a4-0084-484e-8ac2-269f6443f669","Type":"ContainerDied","Data":"fe147d981580ac5f2cbd94672a8fd1db8bc9bec98aa8a9096cf4d61fbc538297"} Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.572355 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe147d981580ac5f2cbd94672a8fd1db8bc9bec98aa8a9096cf4d61fbc538297" Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.572418 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xpwt4" Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.581841 4931 generic.go:334] "Generic (PLEG): container finished" podID="ca286f64-76b9-41f4-8bd6-5daacc4f5895" containerID="afea0b8d79dcdaabc3f2b839e3a32aaddfbc0e2e65374d5bbdfa6fd3185d651e" exitCode=0 Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.581921 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" event={"ID":"ca286f64-76b9-41f4-8bd6-5daacc4f5895","Type":"ContainerDied","Data":"afea0b8d79dcdaabc3f2b839e3a32aaddfbc0e2e65374d5bbdfa6fd3185d651e"} Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.916430 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm"] Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.916680 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-j87zm"] Jan 31 05:00:03 crc kubenswrapper[4931]: I0131 05:00:03.940659 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.105946 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/8fef9306-8d52-4de8-b5b5-887f46c7c817-image-cache-config-data\") pod \"8fef9306-8d52-4de8-b5b5-887f46c7c817\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.106049 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8fef9306-8d52-4de8-b5b5-887f46c7c817\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.106118 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdb7t\" (UniqueName: \"kubernetes.io/projected/8fef9306-8d52-4de8-b5b5-887f46c7c817-kube-api-access-qdb7t\") pod \"8fef9306-8d52-4de8-b5b5-887f46c7c817\" (UID: \"8fef9306-8d52-4de8-b5b5-887f46c7c817\") " Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.116876 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance-cache") pod "8fef9306-8d52-4de8-b5b5-887f46c7c817" (UID: "8fef9306-8d52-4de8-b5b5-887f46c7c817"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.116878 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fef9306-8d52-4de8-b5b5-887f46c7c817-kube-api-access-qdb7t" (OuterVolumeSpecName: "kube-api-access-qdb7t") pod "8fef9306-8d52-4de8-b5b5-887f46c7c817" (UID: "8fef9306-8d52-4de8-b5b5-887f46c7c817"). InnerVolumeSpecName "kube-api-access-qdb7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.117165 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fef9306-8d52-4de8-b5b5-887f46c7c817-image-cache-config-data" (OuterVolumeSpecName: "image-cache-config-data") pod "8fef9306-8d52-4de8-b5b5-887f46c7c817" (UID: "8fef9306-8d52-4de8-b5b5-887f46c7c817"). InnerVolumeSpecName "image-cache-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.207840 4931 reconciler_common.go:293] "Volume detached for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/8fef9306-8d52-4de8-b5b5-887f46c7c817-image-cache-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.207870 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdb7t\" (UniqueName: \"kubernetes.io/projected/8fef9306-8d52-4de8-b5b5-887f46c7c817-kube-api-access-qdb7t\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.590542 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.590813 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-external-api-0-cleaner-2949726r7kn5" event={"ID":"8fef9306-8d52-4de8-b5b5-887f46c7c817","Type":"ContainerDied","Data":"aea74b8e85d8d4a37905f072dbdbd21862ccd3b80e27a92ff06f9c46b652c09e"} Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.590869 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea74b8e85d8d4a37905f072dbdbd21862ccd3b80e27a92ff06f9c46b652c09e" Jan 31 05:00:04 crc kubenswrapper[4931]: I0131 05:00:04.869857 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.018439 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.018530 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2zj7\" (UniqueName: \"kubernetes.io/projected/ca286f64-76b9-41f4-8bd6-5daacc4f5895-kube-api-access-r2zj7\") pod \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.018579 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/ca286f64-76b9-41f4-8bd6-5daacc4f5895-image-cache-config-data\") pod \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\" (UID: \"ca286f64-76b9-41f4-8bd6-5daacc4f5895\") " Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.021711 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "ca286f64-76b9-41f4-8bd6-5daacc4f5895" (UID: "ca286f64-76b9-41f4-8bd6-5daacc4f5895"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.021870 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca286f64-76b9-41f4-8bd6-5daacc4f5895-kube-api-access-r2zj7" (OuterVolumeSpecName: "kube-api-access-r2zj7") pod "ca286f64-76b9-41f4-8bd6-5daacc4f5895" (UID: "ca286f64-76b9-41f4-8bd6-5daacc4f5895"). InnerVolumeSpecName "kube-api-access-r2zj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.023386 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca286f64-76b9-41f4-8bd6-5daacc4f5895-image-cache-config-data" (OuterVolumeSpecName: "image-cache-config-data") pod "ca286f64-76b9-41f4-8bd6-5daacc4f5895" (UID: "ca286f64-76b9-41f4-8bd6-5daacc4f5895"). InnerVolumeSpecName "image-cache-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.120427 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2zj7\" (UniqueName: \"kubernetes.io/projected/ca286f64-76b9-41f4-8bd6-5daacc4f5895-kube-api-access-r2zj7\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.120708 4931 reconciler_common.go:293] "Volume detached for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/ca286f64-76b9-41f4-8bd6-5daacc4f5895-image-cache-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.600659 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" event={"ID":"ca286f64-76b9-41f4-8bd6-5daacc4f5895","Type":"ContainerDied","Data":"d12e5af19f2fb0e378a61e9739a1a741ba431ae1fdeea24a7ab3a41b86b93531"} Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.600698 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d12e5af19f2fb0e378a61e9739a1a741ba431ae1fdeea24a7ab3a41b86b93531" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.600818 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-internal-api-0-cleaner-2949726fx2m2" Jan 31 05:00:05 crc kubenswrapper[4931]: I0131 05:00:05.908379 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7595206-8944-4009-bcd7-f9952d225277" path="/var/lib/kubelet/pods/d7595206-8944-4009-bcd7-f9952d225277/volumes" Jan 31 05:00:21 crc kubenswrapper[4931]: I0131 05:00:21.133667 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:00:21 crc kubenswrapper[4931]: I0131 05:00:21.139399 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:00:51 crc kubenswrapper[4931]: I0131 05:00:51.133901 4931 patch_prober.go:28] interesting pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:00:51 crc kubenswrapper[4931]: I0131 05:00:51.135048 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.155890 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-cron-29497261-b8dq9"] Jan 31 05:01:00 crc kubenswrapper[4931]: E0131 05:01:00.156818 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963ed3a4-0084-484e-8ac2-269f6443f669" containerName="collect-profiles" Jan 31 
05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.156840 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="963ed3a4-0084-484e-8ac2-269f6443f669" containerName="collect-profiles" Jan 31 05:01:00 crc kubenswrapper[4931]: E0131 05:01:00.156877 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fef9306-8d52-4de8-b5b5-887f46c7c817" containerName="glance-cache-glance-default-external-api-0-cleaner" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.156891 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fef9306-8d52-4de8-b5b5-887f46c7c817" containerName="glance-cache-glance-default-external-api-0-cleaner" Jan 31 05:01:00 crc kubenswrapper[4931]: E0131 05:01:00.156926 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca286f64-76b9-41f4-8bd6-5daacc4f5895" containerName="glance-cache-glance-default-internal-api-0-cleaner" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.156937 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca286f64-76b9-41f4-8bd6-5daacc4f5895" containerName="glance-cache-glance-default-internal-api-0-cleaner" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.157149 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="963ed3a4-0084-484e-8ac2-269f6443f669" containerName="collect-profiles" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.157164 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fef9306-8d52-4de8-b5b5-887f46c7c817" containerName="glance-cache-glance-default-external-api-0-cleaner" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.157197 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca286f64-76b9-41f4-8bd6-5daacc4f5895" containerName="glance-cache-glance-default-internal-api-0-cleaner" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.159045 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.185361 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-cron-29497261-b8dq9"] Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.275972 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-config-data\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.276051 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z29c\" (UniqueName: \"kubernetes.io/projected/253479c1-555f-415b-994b-3fb157ff3535-kube-api-access-5z29c\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.276172 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-fernet-keys\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.377846 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-fernet-keys\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.377986 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-config-data\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.378044 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z29c\" (UniqueName: \"kubernetes.io/projected/253479c1-555f-415b-994b-3fb157ff3535-kube-api-access-5z29c\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.389041 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-config-data\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.390211 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-fernet-keys\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.403498 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5z29c\" (UniqueName: \"kubernetes.io/projected/253479c1-555f-415b-994b-3fb157ff3535-kube-api-access-5z29c\") pod \"keystone-cron-29497261-b8dq9\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.486930 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:00 crc kubenswrapper[4931]: I0131 05:01:00.910591 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-cron-29497261-b8dq9"] Jan 31 05:01:01 crc kubenswrapper[4931]: I0131 05:01:01.079417 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" event={"ID":"253479c1-555f-415b-994b-3fb157ff3535","Type":"ContainerStarted","Data":"127ae2d4bbc3c1986feff0e517a54c810fabd2b10f392b437ab86b5b2af9f3ba"} Jan 31 05:01:01 crc kubenswrapper[4931]: I0131 05:01:01.079466 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" event={"ID":"253479c1-555f-415b-994b-3fb157ff3535","Type":"ContainerStarted","Data":"005cceb68c4d1677b6d337e0d607067c9d13e5b9aafe2ec79a9f78677c3768e2"} Jan 31 05:01:01 crc kubenswrapper[4931]: I0131 05:01:01.098626 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" podStartSLOduration=1.098604698 podStartE2EDuration="1.098604698s" podCreationTimestamp="2026-01-31 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:01.093591897 +0000 UTC m=+2219.902820771" watchObservedRunningTime="2026-01-31 05:01:01.098604698 +0000 UTC m=+2219.907833592" Jan 31 05:01:03 crc kubenswrapper[4931]: I0131 05:01:03.096554 4931 generic.go:334] "Generic (PLEG): container finished" podID="253479c1-555f-415b-994b-3fb157ff3535" containerID="127ae2d4bbc3c1986feff0e517a54c810fabd2b10f392b437ab86b5b2af9f3ba" exitCode=0 Jan 31 05:01:03 crc kubenswrapper[4931]: I0131 05:01:03.096665 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" event={"ID":"253479c1-555f-415b-994b-3fb157ff3535","Type":"ContainerDied","Data":"127ae2d4bbc3c1986feff0e517a54c810fabd2b10f392b437ab86b5b2af9f3ba"} Jan 31 05:01:03 crc kubenswrapper[4931]: I0131 05:01:03.865123 4931 scope.go:117] "RemoveContainer" containerID="a4011bd5eb0a3c688cdb4edaa982f67b2230093e333d48081f768b904e4254e0" Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.413056 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.571861 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z29c\" (UniqueName: \"kubernetes.io/projected/253479c1-555f-415b-994b-3fb157ff3535-kube-api-access-5z29c\") pod \"253479c1-555f-415b-994b-3fb157ff3535\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.571907 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-config-data\") pod \"253479c1-555f-415b-994b-3fb157ff3535\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.571984 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-fernet-keys\") pod \"253479c1-555f-415b-994b-3fb157ff3535\" (UID: \"253479c1-555f-415b-994b-3fb157ff3535\") " Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.578179 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "253479c1-555f-415b-994b-3fb157ff3535" (UID: "253479c1-555f-415b-994b-3fb157ff3535"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.592249 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253479c1-555f-415b-994b-3fb157ff3535-kube-api-access-5z29c" (OuterVolumeSpecName: "kube-api-access-5z29c") pod "253479c1-555f-415b-994b-3fb157ff3535" (UID: "253479c1-555f-415b-994b-3fb157ff3535"). InnerVolumeSpecName "kube-api-access-5z29c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.631303 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-config-data" (OuterVolumeSpecName: "config-data") pod "253479c1-555f-415b-994b-3fb157ff3535" (UID: "253479c1-555f-415b-994b-3fb157ff3535"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.673672 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z29c\" (UniqueName: \"kubernetes.io/projected/253479c1-555f-415b-994b-3fb157ff3535-kube-api-access-5z29c\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.673755 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:04 crc kubenswrapper[4931]: I0131 05:01:04.673776 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/253479c1-555f-415b-994b-3fb157ff3535-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:05 crc kubenswrapper[4931]: I0131 05:01:05.112252 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" event={"ID":"253479c1-555f-415b-994b-3fb157ff3535","Type":"ContainerDied","Data":"005cceb68c4d1677b6d337e0d607067c9d13e5b9aafe2ec79a9f78677c3768e2"} Jan 31 05:01:05 crc kubenswrapper[4931]: I0131 05:01:05.112292 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="005cceb68c4d1677b6d337e0d607067c9d13e5b9aafe2ec79a9f78677c3768e2" Jan 31 05:01:05 crc kubenswrapper[4931]: I0131 05:01:05.112342 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497261-b8dq9" Jan 31 05:01:12 crc kubenswrapper[4931]: I0131 05:01:12.953654 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lg82c"] Jan 31 05:01:12 crc kubenswrapper[4931]: E0131 05:01:12.954875 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253479c1-555f-415b-994b-3fb157ff3535" containerName="keystone-cron" Jan 31 05:01:12 crc kubenswrapper[4931]: I0131 05:01:12.954897 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="253479c1-555f-415b-994b-3fb157ff3535" containerName="keystone-cron" Jan 31 05:01:12 crc kubenswrapper[4931]: I0131 05:01:12.955183 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="253479c1-555f-415b-994b-3fb157ff3535" containerName="keystone-cron" Jan 31 05:01:12 crc kubenswrapper[4931]: I0131 05:01:12.956751 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:12 crc kubenswrapper[4931]: I0131 05:01:12.984968 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lg82c"] Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.129823 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-catalog-content\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.129862 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-utilities\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.129999 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4lmt\" (UniqueName: \"kubernetes.io/projected/6bf0f465-a508-4cf0-b5ba-1d17143532f5-kube-api-access-t4lmt\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.231154 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-catalog-content\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.231209 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-utilities\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.231334 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4lmt\" (UniqueName: \"kubernetes.io/projected/6bf0f465-a508-4cf0-b5ba-1d17143532f5-kube-api-access-t4lmt\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.231785 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-catalog-content\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.231829 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-utilities\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.249233 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t4lmt\" (UniqueName: \"kubernetes.io/projected/6bf0f465-a508-4cf0-b5ba-1d17143532f5-kube-api-access-t4lmt\") pod \"community-operators-lg82c\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.284343 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:13 crc kubenswrapper[4931]: I0131 05:01:13.771760 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lg82c"] Jan 31 05:01:14 crc kubenswrapper[4931]: I0131 05:01:14.188193 4931 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f465-a508-4cf0-b5ba-1d17143532f5" containerID="cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae" exitCode=0 Jan 31 05:01:14 crc kubenswrapper[4931]: I0131 05:01:14.188262 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg82c" event={"ID":"6bf0f465-a508-4cf0-b5ba-1d17143532f5","Type":"ContainerDied","Data":"cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae"} Jan 31 05:01:14 crc kubenswrapper[4931]: I0131 05:01:14.188591 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg82c" event={"ID":"6bf0f465-a508-4cf0-b5ba-1d17143532f5","Type":"ContainerStarted","Data":"8f63eadcb2a511603d45d1a369be13b9b06cce9d0415f97126e12348b30c68fa"} Jan 31 05:01:15 crc kubenswrapper[4931]: I0131 05:01:15.200071 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg82c" event={"ID":"6bf0f465-a508-4cf0-b5ba-1d17143532f5","Type":"ContainerStarted","Data":"50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4"} Jan 31 05:01:16 crc kubenswrapper[4931]: I0131 05:01:16.210896 4931 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f465-a508-4cf0-b5ba-1d17143532f5" containerID="50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4" exitCode=0 Jan 31 05:01:16 crc kubenswrapper[4931]: I0131 05:01:16.210959 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg82c" event={"ID":"6bf0f465-a508-4cf0-b5ba-1d17143532f5","Type":"ContainerDied","Data":"50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4"} Jan 31 05:01:17 crc kubenswrapper[4931]: I0131 05:01:17.220240 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg82c" event={"ID":"6bf0f465-a508-4cf0-b5ba-1d17143532f5","Type":"ContainerStarted","Data":"3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01"} Jan 31 05:01:17 crc kubenswrapper[4931]: I0131 05:01:17.242417 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lg82c" podStartSLOduration=2.710577006 podStartE2EDuration="5.24239828s" podCreationTimestamp="2026-01-31 05:01:12 +0000 UTC" firstStartedPulling="2026-01-31 05:01:14.189426892 +0000 UTC m=+2232.998655766" lastFinishedPulling="2026-01-31 05:01:16.721248166 +0000 UTC m=+2235.530477040" observedRunningTime="2026-01-31 05:01:17.234667082 +0000 UTC m=+2236.043895966" watchObservedRunningTime="2026-01-31 05:01:17.24239828 +0000 UTC m=+2236.051627144" Jan 31 05:01:21 crc kubenswrapper[4931]: I0131 05:01:21.133792 4931 patch_prober.go:28] interesting 
pod/machine-config-daemon-pcg8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:01:21 crc kubenswrapper[4931]: I0131 05:01:21.134146 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:01:21 crc kubenswrapper[4931]: I0131 05:01:21.134197 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" Jan 31 05:01:21 crc kubenswrapper[4931]: I0131 05:01:21.134879 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f5f88ecd69f58f7a273fb52ae29b1634585278a7a92f8742797538314c31e33"} pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:01:21 crc kubenswrapper[4931]: I0131 05:01:21.134948 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" containerName="machine-config-daemon" containerID="cri-o://9f5f88ecd69f58f7a273fb52ae29b1634585278a7a92f8742797538314c31e33" gracePeriod=600 Jan 31 05:01:21 crc kubenswrapper[4931]: E0131 05:01:21.275671 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 05:01:22 crc kubenswrapper[4931]: I0131 05:01:22.278055 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7d60e8b-e113-470f-93ff-a8a795074642" containerID="9f5f88ecd69f58f7a273fb52ae29b1634585278a7a92f8742797538314c31e33" exitCode=0 Jan 31 05:01:22 crc kubenswrapper[4931]: I0131 05:01:22.278141 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" event={"ID":"c7d60e8b-e113-470f-93ff-a8a795074642","Type":"ContainerDied","Data":"9f5f88ecd69f58f7a273fb52ae29b1634585278a7a92f8742797538314c31e33"} Jan 31 05:01:22 crc kubenswrapper[4931]: I0131 05:01:22.278224 4931 scope.go:117] "RemoveContainer" containerID="047e2f36082083d6896ae43a3cede568805e5bab324b0207cfddda334c9e723f" Jan 31 05:01:22 crc kubenswrapper[4931]: I0131 05:01:22.279044 4931 scope.go:117] "RemoveContainer" containerID="9f5f88ecd69f58f7a273fb52ae29b1634585278a7a92f8742797538314c31e33" Jan 31 05:01:22 crc kubenswrapper[4931]: E0131 05:01:22.280994 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcg8z_openshift-machine-config-operator(c7d60e8b-e113-470f-93ff-a8a795074642)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pcg8z" podUID="c7d60e8b-e113-470f-93ff-a8a795074642" Jan 31 05:01:23 crc kubenswrapper[4931]: I0131 05:01:23.284682 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:23 crc kubenswrapper[4931]: I0131 05:01:23.285022 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:23 crc kubenswrapper[4931]: I0131 05:01:23.350212 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:24 crc kubenswrapper[4931]: I0131 05:01:24.371612 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:24 crc kubenswrapper[4931]: I0131 05:01:24.440177 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lg82c"] Jan 31 05:01:26 crc kubenswrapper[4931]: I0131 05:01:26.317249 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lg82c" podUID="6bf0f465-a508-4cf0-b5ba-1d17143532f5" containerName="registry-server" containerID="cri-o://3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01" gracePeriod=2 Jan 31 05:01:26 crc kubenswrapper[4931]: I0131 05:01:26.786050 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:26 crc kubenswrapper[4931]: I0131 05:01:26.971323 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-utilities\") pod \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " Jan 31 05:01:26 crc kubenswrapper[4931]: I0131 05:01:26.971828 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4lmt\" (UniqueName: \"kubernetes.io/projected/6bf0f465-a508-4cf0-b5ba-1d17143532f5-kube-api-access-t4lmt\") pod \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " Jan 31 05:01:26 crc kubenswrapper[4931]: I0131 05:01:26.971945 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-catalog-content\") pod \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\" (UID: \"6bf0f465-a508-4cf0-b5ba-1d17143532f5\") " Jan 31 05:01:26 crc kubenswrapper[4931]: I0131 05:01:26.973769 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-utilities" (OuterVolumeSpecName: "utilities") pod "6bf0f465-a508-4cf0-b5ba-1d17143532f5" (UID: "6bf0f465-a508-4cf0-b5ba-1d17143532f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:01:26 crc kubenswrapper[4931]: I0131 05:01:26.978563 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf0f465-a508-4cf0-b5ba-1d17143532f5-kube-api-access-t4lmt" (OuterVolumeSpecName: "kube-api-access-t4lmt") pod "6bf0f465-a508-4cf0-b5ba-1d17143532f5" (UID: "6bf0f465-a508-4cf0-b5ba-1d17143532f5"). InnerVolumeSpecName "kube-api-access-t4lmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.044742 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bf0f465-a508-4cf0-b5ba-1d17143532f5" (UID: "6bf0f465-a508-4cf0-b5ba-1d17143532f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.073815 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.073845 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f465-a508-4cf0-b5ba-1d17143532f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.073855 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4lmt\" (UniqueName: \"kubernetes.io/projected/6bf0f465-a508-4cf0-b5ba-1d17143532f5-kube-api-access-t4lmt\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.341435 4931 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f465-a508-4cf0-b5ba-1d17143532f5" containerID="3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01" exitCode=0 Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.341481 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg82c" event={"ID":"6bf0f465-a508-4cf0-b5ba-1d17143532f5","Type":"ContainerDied","Data":"3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01"} Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.341506 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg82c" event={"ID":"6bf0f465-a508-4cf0-b5ba-1d17143532f5","Type":"ContainerDied","Data":"8f63eadcb2a511603d45d1a369be13b9b06cce9d0415f97126e12348b30c68fa"} Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.341521 4931 scope.go:117] "RemoveContainer" containerID="3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.341635 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lg82c" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.364060 4931 scope.go:117] "RemoveContainer" containerID="50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.384605 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lg82c"] Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.390835 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lg82c"] Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.394084 4931 scope.go:117] "RemoveContainer" containerID="cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.423955 4931 scope.go:117] "RemoveContainer" containerID="3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01" Jan 31 05:01:27 crc kubenswrapper[4931]: E0131 05:01:27.424356 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01\": container with ID starting with 3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01 not found: ID does not exist" containerID="3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.424389 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01"} err="failed to get container status \"3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01\": rpc error: code = NotFound desc = could not find container \"3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01\": container with ID starting with 3f1ab006b67fa3fccd0d0e91f0db30c8bf81fd67d6f26cf4b08fd8f69590ed01 not found: ID does not exist" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.424410 4931 scope.go:117] "RemoveContainer" containerID="50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4" Jan 31 05:01:27 crc kubenswrapper[4931]: E0131 05:01:27.424739 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4\": container with ID starting with 50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4 not found: ID does not exist" containerID="50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.424796 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4"} err="failed to get container status \"50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4\": rpc error: code = NotFound desc = could not find container \"50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4\": container with ID starting with 50e53cffdbda63c9138ca7080ad6cb5acc719d255a930a40512c225964399fc4 not found: ID does not exist" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.424813 4931 scope.go:117] "RemoveContainer" containerID="cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae" Jan 31 05:01:27 crc kubenswrapper[4931]: E0131 05:01:27.425014 4931 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae\": container with ID starting with cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae not found: ID does not exist" containerID="cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.425033 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae"} err="failed to get container status \"cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae\": rpc error: code = NotFound desc = could not find container \"cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae\": container with ID starting with cb442ad7196ffbc5b19b3d3a27e9c38e4d8d784a281a0bd23e8a71caf3dd71ae not found: ID does not exist" Jan 31 05:01:27 crc kubenswrapper[4931]: I0131 05:01:27.906187 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf0f465-a508-4cf0-b5ba-1d17143532f5" path="/var/lib/kubelet/pods/6bf0f465-a508-4cf0-b5ba-1d17143532f5/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137306265024455 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137306266017373 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137301463016510 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137301463015460 5ustar corecore